|Developer(s)||Apache Software Foundation|
|Initial release||December 10, 2011|
|Stable release||2.8.0 / March 22, 2017|
|Type||Distributed file system|
|License||Apache License 2.0|
Apache Hadoop is an open-source software framework used for distributed storage and processing of datasets of big data using the MapReduce programming model. It consists of computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File System (HDFS), and a processing part which is a MapReduce programming model. Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then transfers packaged code into nodes to process the data in parallel. This approach takes advantage of data locality, where nodes manipulate the data they have access to. This allows the dataset to be processed faster and more efficiently than it would be in a more conventional supercomputer architecture that relies on a parallel file system where computation and data are distributed via high-speed networking.
The base Apache Hadoop framework is composed of the following modules: Hadoop Common, which contains the libraries and utilities needed by the other Hadoop modules; the Hadoop Distributed File System (HDFS), which stores data on commodity machines; Hadoop YARN, a resource-management platform responsible for managing computing resources in the cluster and scheduling users' applications; and Hadoop MapReduce, an implementation of the MapReduce programming model for large-scale data processing.
The term Hadoop has come to refer not just to the aforementioned base modules and sub-modules, but also to the ecosystem, or collection of additional software packages that can be installed on top of or alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Phoenix, Apache Spark, Apache ZooKeeper, Cloudera Impala, Apache Flume, Apache Sqoop, Apache Oozie, and Apache Storm.
The Hadoop framework itself is mostly written in the Java programming language, with some native code in C and command line utilities written as shell scripts. Though MapReduce Java code is common, any programming language can be used with "Hadoop Streaming" to implement the "map" and "reduce" parts of the user's program. Other projects in the Hadoop ecosystem expose richer user interfaces.
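As an illustration of what the "map" and "reduce" parts look like in Java, the following is a minimal word-count sketch written against the org.apache.hadoop.mapreduce API; the class names WordCountMapper and WordCountReducer are illustrative rather than part of any Hadoop release.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emits (word, 1) for every token in an input line.
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reducer: sums the counts emitted for each word.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}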
The genesis of Hadoop was the "Google File System" paper that was published in October 2003. This paper spawned another one from Google, "MapReduce: Simplified Data Processing on Large Clusters". Development started in the Apache Nutch project, but was moved to the new Hadoop subproject in January 2006. Doug Cutting, who was working at Yahoo! at the time, named it after his son's toy elephant. The initial code that was factored out of Nutch consisted of about 5,000 lines of code for HDFS and about 6,000 lines of code for MapReduce.
The first committer to add to the Hadoop project was Owen O'Malley (in March 2006); Hadoop 0.1.0 was released in April 2006. It continues to evolve through the many contributions that are being made to the project.
Year | Month | Event
2003 | October | Google File System paper released
2004 | December | MapReduce: Simplified Data Processing on Large Clusters paper released
2006 | January | Hadoop subproject created with mailing lists, jira, and wiki
2006 | January | Hadoop is born from Nutch 197
2006 | February | NDFS + MapReduce moved out of Apache Nutch to create Hadoop
2006 | February | Owen O'Malley's first patch goes into Hadoop
2006 | February | Hadoop is named after Cutting's son's yellow plush toy
2006 | April | Hadoop 0.1.0 released
2006 | April | Hadoop sorts 1.8 TB on 188 nodes in 47.9 hours
2006 | May | Yahoo deploys 300-machine Hadoop cluster
2006 | October | Yahoo Hadoop cluster reaches 600 machines
2007 | April | Yahoo runs two clusters of 1,000 machines
2007 | June | Only three companies on "Powered by Hadoop" page
2007 | October | First release of Hadoop that includes HBase
2007 | October | Yahoo Labs creates Pig and donates it to the ASF
2008 | January | YARN JIRA opened (MAPREDUCE-279)
2008 | January | 20 companies on "Powered by Hadoop" page
2008 | February | Yahoo moves its web index onto Hadoop
2008 | February | Yahoo! production search index generated by a 10,000-core Hadoop cluster
2008 | March | First Hadoop Summit
2008 | April | Hadoop becomes the world's fastest system to sort a terabyte of data, sorting one terabyte in 209 seconds on a 910-node cluster
2008 | May | Hadoop wins TeraByte Sort (world record, sortbenchmark.org)
2008 | July | Hadoop wins Terabyte Sort Benchmark
2008 | October | Loading 10 TB/day into Yahoo clusters
2008 | October | Cloudera, a Hadoop distributor, is founded
2008 | November | Google MapReduce implementation sorts one terabyte in 68 seconds
2009 | March | Yahoo runs 17 clusters with 24,000 machines
2009 | April | Hadoop sorts a petabyte
2009 | May | Yahoo! uses Hadoop to sort one terabyte in 62 seconds
2009 | June | Second Hadoop Summit
2009 | July | Hadoop Core is renamed Hadoop Common
2009 | July | MapR, a Hadoop distributor, is founded
2009 | July | HDFS becomes a separate subproject
2009 | July | MapReduce becomes a separate subproject
2010 | January | Kerberos support added to Hadoop
2010 | May | Apache HBase graduates
2010 | June | Third Hadoop Summit
2010 | June | Yahoo: 4,000 nodes / 70 petabytes
2010 | June | Facebook: 2,300 clusters / 40 petabytes
2010 | September | Apache Hive graduates
2010 | September | Apache Pig graduates
2011 | January | Apache ZooKeeper graduates
2011 | January | Facebook, LinkedIn, eBay and IBM collectively contribute 200,000 lines of code
2011 | March | Apache Hadoop takes top prize at Media Guardian Innovation Awards
2011 | June | Rob Bearden and Eric Baldeschwieler spin Hortonworks out of Yahoo
2011 | June | Yahoo has 42,000 Hadoop nodes and hundreds of petabytes of storage
2011 | June | Third Annual Hadoop Summit (1,700 attendees)
2011 | October | Debate over which company had contributed more to Hadoop
2012 | January | Hadoop community moves to split resource management out of MapReduce and into YARN
2012 | June | San Jose Hadoop Summit (2,100 attendees)
2012 | November | Apache Hadoop 1.0 available
2013 | March | Hadoop Summit, Amsterdam (500 attendees)
2013 | March | YARN deployed in production at Yahoo
2013 | June | San Jose Hadoop Summit (2,700 attendees)
2013 | October | Apache Hadoop 2.2 available
2014 | February | Apache Hadoop 2.3 available
2014 | February | Apache Spark becomes a top-level Apache project
2014 | April | Hadoop Summit, Amsterdam (750 attendees)
2014 | June | Apache Hadoop 2.4 available
2014 | June | San Jose Hadoop Summit (3,200 attendees)
2014 | August | Apache Hadoop 2.5 available
2014 | November | Apache Hadoop 2.6 available
2015 | April | Hadoop Summit Europe
2015 | June | Apache Hadoop 2.7 available
2017 | March | Apache Hadoop 2.8 available
Hadoop consists of the Hadoop Common package, which provides file system and operating system level abstractions, a MapReduce engine (either MapReduce/MR1 or YARN/MR2) and the Hadoop Distributed File System (HDFS). The Hadoop Common package contains the Java ARchive (JAR) files and scripts needed to start Hadoop.
For effective scheduling of work, every Hadoop-compatible file system should provide location awareness - the name of the rack (or, more precisely, of the network switch) where a worker node is. Hadoop applications can use this information to execute code on the node where the data is, and, failing that, on the same rack/switch to reduce backbone traffic. HDFS uses this method when replicating data for data redundancy across multiple racks. This approach reduces the impact of a rack power outage or switch failure; if any of these hardware failures occurs, the data will remain available.
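Rack awareness is normally configured by pointing Hadoop at an administrator-supplied topology script that maps host names or IP addresses to rack identifiers (usually in core-site.xml). The sketch below sets the standard Hadoop 2.x property through the Java Configuration API purely for illustration; the script path /etc/hadoop/topology.sh is a hypothetical example.

import org.apache.hadoop.conf.Configuration;

class RackAwarenessExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Point Hadoop at a site-specific script that maps each host or IP to a
        // rack identifier such as "/rack-a"; the path below is hypothetical.
        conf.set("net.topology.script.file.name", "/etc/hadoop/topology.sh");
        System.out.println(conf.get("net.topology.script.file.name"));
    }
}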
A small Hadoop cluster includes a single master and multiple worker nodes. The master node consists of a JobTracker, TaskTracker, NameNode, and DataNode. A slave or worker node acts as both a DataNode and TaskTracker, though it is possible to have data-only and compute-only worker nodes; these are normally used only in nonstandard applications.
In a larger cluster, HDFS nodes are managed through a dedicated NameNode server to host the file system index, and a secondary NameNode that can generate snapshots of the namenode's memory structures, thereby preventing file-system corruption and loss of data. Similarly, a standalone JobTracker server can manage job scheduling across nodes. When Hadoop MapReduce is used with an alternate file system, the NameNode, secondary NameNode, and DataNode architecture of HDFS are replaced by the file-system-specific equivalents.
The HDFS is a distributed, scalable, and portable file system written in Java for the Hadoop framework. Some consider it to instead be a data store due to its lack of POSIX compliance and inability to be mounted, but it does provide shell commands and Java application programming interface (API) methods that are similar to those of other file systems. A Hadoop cluster nominally has a single namenode plus a cluster of datanodes, although redundancy options are available for the namenode due to its criticality. Each datanode serves up blocks of data over the network using a block protocol specific to HDFS. The file system uses TCP/IP sockets for communication; the namenode, datanodes, and clients communicate with each other using remote procedure calls (RPC).
HDFS stores large files (typically in the range of gigabytes to terabytes) across multiple machines. It achieves reliability by replicating the data across multiple hosts, and hence theoretically does not require redundant array of independent disks (RAID) storage on hosts (but to increase input-output (I/O) performance some RAID configurations are still useful). With the default replication value, 3, data is stored on three nodes: two on the same rack, and one on a different rack. Data nodes can talk to each other to rebalance data, to move copies around, and to keep the replication of data high. HDFS is not fully POSIX-compliant, because the requirements for a POSIX file-system differ from the target goals of a Hadoop application. The trade-off of not having a fully POSIX-compliant file-system is increased performance for data throughput and support for non-POSIX operations such as Append.
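Replication is a per-file property that clients can set or change through the Java file-system API. A minimal sketch, assuming an existing file at the hypothetical path /data/example.txt:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "3"); // default replication for files created by this client
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/data/example.txt"); // hypothetical path, assumed to already exist
        fs.setReplication(file, (short) 2);        // change the replication factor of a single file
        System.out.println("Replication: " + fs.getFileStatus(file).getReplication());
        fs.close();
    }
}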
HDFS added high-availability capabilities, as announced for version 2.0 in May 2012, letting the main metadata server (the NameNode) be manually failed over onto a backup. The project has also started developing automatic fail-over.
The HDFS file system includes a so-called secondary namenode, a misleading term that some might incorrectly interpret as a backup namenode that takes over when the primary namenode goes offline. In fact, the secondary namenode regularly connects to the primary namenode and builds snapshots of the primary namenode's directory information, which the system then saves to local or remote directories. These checkpointed images can be used to restart a failed primary namenode without having to replay the entire journal of file-system actions and then edit the log to create an up-to-date directory structure. Because the namenode is the single point for storage and management of metadata, it can become a bottleneck when supporting a huge number of files, especially a large number of small files. HDFS Federation, a new addition, aims to tackle this problem to a certain extent by allowing multiple namespaces served by separate namenodes. HDFS nonetheless retains some known issues, such as the small-file problem, scalability limits, the single point of failure (SPoF), and bottlenecks under huge metadata request loads.
One advantage of using HDFS is data awareness between the job tracker and task tracker. The job tracker schedules map or reduce jobs to task trackers with an awareness of the data location. For example, if node A contains data (x, y, z) and node B contains data (a, b, c), the job tracker schedules node B to perform map or reduce tasks on (a, b, c) and node A to perform map or reduce tasks on (x, y, z). This reduces the amount of traffic that goes over the network and prevents unnecessary data transfer. When Hadoop is used with other file systems, this advantage is not always available; this can have a significant impact on job-completion times, as demonstrated with data-intensive jobs.
HDFS was designed for mostly immutable files and may not be suitable for systems requiring concurrent write-operations.
File access can be achieved through the native Java API, the Thrift API (which generates a client in a number of languages, e.g. C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml), the command-line interface, the HDFS-UI web application over HTTP, or via third-party network client libraries.
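For example, a client written against the native Java API interacts with HDFS through the org.apache.hadoop.fs.FileSystem abstraction. The following is a minimal sketch of writing a file and reading it back; the namenode URI and path are hypothetical.

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

class HdfsFileAccessExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // normally set in core-site.xml; URI is hypothetical
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/user/example/hello.txt"); // hypothetical path

        // Write a small file into HDFS.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write("hello, hdfs".getBytes(StandardCharsets.UTF_8));
        }

        // Read it back and copy the contents to standard output.
        try (FSDataInputStream in = fs.open(path)) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
        fs.close();
    }
}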
HDFS is designed for portability across various hardware platforms and for compatibility with a variety of underlying operating systems. The HDFS design introduces portability limitations that result in some performance bottlenecks, since the Java implementation cannot use features that are exclusive to the platform on which HDFS is running. Due to its widespread integration into enterprise-level infrastructure, monitoring HDFS performance at scale has become an increasingly important issue. Monitoring end-to-end performance requires tracking metrics from datanodes, namenodes, and the underlying operating system. There are currently several monitoring platforms to track HDFS performance, including HortonWorks, Cloudera, and Datadog.
Hadoop works directly with any distributed file system that can be mounted by the underlying operating system simply by using a file:// URL; however, this comes at a price: the loss of locality. To reduce network traffic, Hadoop needs to know which servers are closest to the data, information that Hadoop-specific file system bridges can provide.
As of May 2011, the supported file systems bundled with Apache Hadoop were:
A number of third-party file system bridges have also been written, none of which are currently in Hadoop distributions. However, some commercial distributions of Hadoop ship with an alternative file system as the default - specifically IBM and MapR.
Atop the file systems comes the MapReduce engine, which consists of one JobTracker, to which client applications submit MapReduce jobs. The JobTracker pushes work to available TaskTracker nodes in the cluster, striving to keep the work as close to the data as possible. With a rack-aware file system, the JobTracker knows which node contains the data, and which other machines are nearby. If the work cannot be hosted on the actual node where the data resides, priority is given to nodes in the same rack. This reduces network traffic on the main backbone network. If a TaskTracker fails or times out, that part of the job is rescheduled. The TaskTracker on each node spawns a separate Java virtual machine (JVM) process to prevent the TaskTracker itself from failing if the running job crashes its JVM. A heartbeat is sent from the TaskTracker to the JobTracker every few minutes to check its status. The JobTracker and TaskTracker status and information are exposed by Jetty and can be viewed from a web browser.
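A client submits such a job through a small driver program. The sketch below uses the newer org.apache.hadoop.mapreduce API (the submission model is the same whether the engine underneath is the classic JobTracker/TaskTracker pair or YARN) and reuses the hypothetical WordCountMapper and WordCountReducer classes from the earlier sketch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);    // mapper from the earlier sketch
        job.setCombinerClass(WordCountReducer.class); // the reducer doubles as a combiner here
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);       // blocks until the job finishes
    }
}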
Known limitations of this approach are:
By default, Hadoop uses FIFO scheduling, and optionally five scheduling priorities to schedule jobs from a work queue. In version 0.19, the job scheduler was refactored out of the JobTracker, which added the ability to use an alternate scheduler (such as the fair scheduler or the capacity scheduler, described next).
The fair scheduler was developed by Facebook. The goal of the fair scheduler is to provide fast response times for small jobs and quality of service (QoS) for production jobs. The fair scheduler has three basic concepts: jobs are grouped into pools; each pool is assigned a guaranteed minimum share; and excess capacity is split between jobs.
By default, jobs that are uncategorized go into a default pool. Pools have to specify the minimum number of map slots and reduce slots, as well as a limit on the number of running jobs.
The capacity scheduler was developed by Yahoo. The capacity scheduler supports several features that are similar to those of the fair scheduler.
There is no preemption once a job is running.
The HDFS file system is not restricted to MapReduce jobs. It can be used for other applications, many of which are under development at Apache. The list includes the HBase database, the Apache Mahout machine learning system, and the Apache Hive Data Warehouse system. Hadoop can, in theory, be used for any sort of work that is batch-oriented rather than real-time, is very data-intensive, and benefits from parallel processing of data. It can also be used to complement a real-time system, such as lambda architecture, Apache Storm, Flink and Spark Streaming.
As of October 2009, commercial applications of Hadoop included:
On February 19, 2008, Yahoo! Inc. launched what they claimed was the world's largest Hadoop production application. The Yahoo! Search Webmap is a Hadoop application that runs on a Linux cluster with more than 10,000 cores and produced data that was used in every Yahoo! web search query. There are multiple Hadoop clusters at Yahoo! and no HDFS file systems or MapReduce jobs are split across multiple data centers. Every Hadoop cluster node bootstraps the Linux image, including the Hadoop distribution. Work that the clusters perform is known to include the index calculations for the Yahoo! search engine. In June 2009, Yahoo! made the source code of its Hadoop version available to the open-source community.
In 2010, Facebook claimed that they had the largest Hadoop cluster in the world with 21 PB of storage. In June 2012, they announced the data had grown to 100 PB and later that year they announced that the data was growing by roughly half a PB per day.
As of 2013, Hadoop adoption had become widespread: more than half of the Fortune 50 used Hadoop.
Hadoop can be deployed in a traditional onsite datacenter as well as in the cloud. The cloud allows organizations to deploy Hadoop without the need to acquire hardware or specific setup expertise. Vendors who currently offer Hadoop in the cloud include Microsoft, Amazon, IBM, Google, Oracle, and CenturyLink Cloud.
Azure HDInsight is a service that deploys Hadoop on Microsoft Azure. HDInsight uses the Hortonworks Data Platform (HDP) and was jointly developed with Hortonworks. HDInsight allows programming extensions with .NET (in addition to Java) and supports the creation of Hadoop clusters using Ubuntu Linux. By deploying HDInsight in the cloud, organizations can spin up the number of nodes they want and only get charged for the compute and storage that is used. Hortonworks implementations can also move data from the on-premises datacenter to the cloud for backup, development/test, and bursting scenarios. It is also possible to run Cloudera or Hortonworks Hadoop clusters on Azure Virtual Machines.
It is possible to run Hadoop on Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). As an example, The New York Times used 100 Amazon EC2 instances and a Hadoop application to process 4 TB of raw image TIFF data (stored in S3) into 11 million finished PDFs in the space of 24 hours at a computation cost of about $240 (not including bandwidth).
There is support for the S3 object store in the Apache Hadoop releases, though this is below what one expects from a traditional POSIX filesystem. Specifically, operations such as rename and delete on directories are not atomic, and can take time proportional to the number of entries and the amount of data in them.
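A bucket exposed through the S3A connector can be addressed like any other Hadoop file system. The following is a minimal sketch, assuming the hadoop-aws module (and its AWS SDK dependency) is on the classpath and that credentials are supplied through the usual fs.s3a.* properties or the environment; the bucket name and paths are hypothetical.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class S3aListingExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Credentials are normally provided via fs.s3a.access.key / fs.s3a.secret.key
        // or an instance profile; they are omitted here.
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf); // hypothetical bucket
        for (FileStatus status : fs.listStatus(new Path("s3a://example-bucket/logs/"))) {
            System.out.println(status.getPath() + " " + status.getLen());
        }
        fs.close();
    }
}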
Elastic MapReduce (EMR) was introduced by Amazon.com in April 2009. Provisioning of the Hadoop cluster, running and terminating jobs, and handling data transfer between EC2(VM) and S3(Object Storage) are automated by Elastic MapReduce. Apache Hive, which is built on top of Hadoop for providing data warehouse services, is also offered in Elastic MapReduce. Support for using Spot Instances was later added in August 2011. Elastic MapReduce is fault-tolerant for slave failures, and it is recommended to only run the Task Instance Group on spot instances to take advantage of the lower cost while maintaining availability.
CenturyLink Cloud offers Hadoop via both a managed and un-managed model. CLC also offers customers several managed Cloudera Blueprints, the newest managed service in the CenturyLink Cloud big data portfolio, which also includes Cassandra and MongoDB solutions.
Google also offers connectors for using other Google Cloud Platform products with Hadoop, such as a Google Cloud Storage connector for using Google Cloud Storage and a Google BigQuery connector for using Google BigQuery.
A number of companies offer commercial implementations or support for Hadoop.
The Apache Software Foundation has stated that only software officially released by the Apache Hadoop Project can be called Apache Hadoop or Distributions of Apache Hadoop. The naming of products and derivative works from other vendors and the term "compatible" are somewhat controversial within the Hadoop developer community.
Some papers influenced the birth and growth of Hadoop and big data processing. Some of these are: