Get Started
In this post we will install a stable version of Apache Hadoop on a laptop running Linux Mint 15; the same steps should work on any Debian-based system, including Ubuntu. To start, we need the Hadoop package and a working Java install. If Java is not installed yet, follow my install Java post, and check Hadoop Java Versions to see which Java releases are supported with Hadoop. The next step is to get Hadoop itself, which can be downloaded from the Hadoop webpage; this post uses hadoop-1.2.1.
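For example, the tarball can be fetched from the Apache archive (the exact mirror path below is my assumption; any official download mirror works):
$ wget https://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz -P ~/Downloads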
Create Dedicated Hadoop User
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hdpuser
Give the user sudo rights
$ sudo nano /etc/sudoers
Add this line to the end of the file:
hdpuser ALL=(ALL:ALL) ALL
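Before moving on, you can switch to the new account and confirm that sudo works (a quick sanity check, not in the original steps):
$ su - hdpuser
$ sudo whoami
The second command should print root.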
Configuring Secure Shell (SSH)
Communication between the master and slave nodes uses SSH, so we need to make sure an SSH server is installed and the SSH daemon is running. From this point on, run the commands as hdpuser, since Hadoop will later connect over SSH as that user.
Install the server with the following command:
$ sudo apt-get install openssh-server
To check the status of the server, use:
$ /etc/init.d/ssh status
To start the SSH server, use:
$ /etc/init.d/ssh start
Now that the SSH server is running, we need to set up a local SSH connection that does not prompt for a password. To enable passphraseless SSH, run:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
To check SSH:
$ ssh localhost
$ exit
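If ssh localhost still prompts for a password, the usual cause is file permissions; assuming the default ~/.ssh layout, tightening them often fixes it:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys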
Disabling IPv6
We need to make sure IPv6 is disabled; since all Hadoop communication between nodes is IPv4-based, it is best to disable IPv6 entirely. To do this, first open /etc/sysctl.conf:
$ sudo nano /etc/sysctl.conf
Add the following lines at the end:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Save and exit
Reload sysctl for changes to take effect
$ sudo sysctl -p /etc/sysctl.conf
If the following command returns 1 (after a reboot), IPv6 is disabled:
$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
Install Hadoop
Download Version 1.2.1 (Stable Version)
Make the Hadoop installation directory:
$ sudo mkdir -p /usr/hadoop
Copy the Hadoop installer to the installation directory:
$ sudo cp -r ~/Downloads/hadoop-1.2.1.tar.gz /usr/hadoop
Extract the Hadoop installer:
$ cd /usr/hadoop
$ sudo tar xvzf hadoop-1.2.1.tar.gz
Rename it to hadoop
$ sudo mv hadoop-1.2.1 hadoop
Change the owner of this folder to hdpuser:
$ sudo chown -R hdpuser:hadoop hadoop
Update .bashrc with Hadoop-related environment variables
$ sudo nano ~/.bashrc
Add the following lines at the end (do this as hdpuser, so the variables end up in hdpuser's .bashrc):
# Set HADOOP_HOME
export HADOOP_HOME=/usr/hadoop/hadoop
# Set JAVA_HOME
# Important: if you have installed Java from apt-get,
# use /usr instead of /usr/local/java/jdk1.7.0_51
export JAVA_HOME=/usr/local/java/jdk1.7.0_51
# Add Hadoop bin directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
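Note: with HADOOP_HOME exported, Hadoop 1.x prints a "Warning: $HADOOP_HOME is deprecated." message on every command. If you want to silence it, you can optionally add one more line (my addition, not required for the setup):
# Optional: suppress the HADOOP_HOME deprecation warning
export HADOOP_HOME_WARN_SUPPRESS=1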
Save & Exit
Reload bashrc
$ source ~/.bashrc
Update JAVA_HOME in hadoop-env.sh
$ cd /usr/hadoop/hadoop
$ sudo nano conf/hadoop-env.sh
Add the line:
export JAVA_HOME=/usr/local/java/jdk1.7.0_51
Save and exit
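As a side note, if you would rather not disable IPv6 system-wide, a commonly used alternative (my suggestion, not part of the original steps) is to make Hadoop's JVMs prefer IPv4 by also adding this line to conf/hadoop-env.sh:
# Prefer IPv4 in Hadoop's JVMs (alternative to disabling IPv6 system-wide)
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true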
Create a Directory to hold Hadoop’s Temporary Files:
$ sudo mkdir -p /usr/hadoop/tmp
Provide hdpuser the rights to this directory:
$ sudo chown hdpuser:hadoop /usr/hadoop/tmp
Hadoop Configurations
Modify conf/core-site.xml – Core Configuration
$ sudo nano conf/core-site.xml
Add the following lines between configuration tags
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/tmp</value>
<description>Hadoop's temporary directory</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>Specifying HDFS as the default file system.</description>
</property>
Modify conf/mapred-site.xml – MapReduce configuration
$ sudo nano conf/mapred-site.xml
Add the following lines between configuration tags
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The URI is used to monitor the status of MapReduce tasks</description>
</property>
Modify conf/hdfs-site.xml – File Replication
$ sudo nano conf/hdfs-site.xml
Add following lines between configuration tags:
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.</description>
</property>
Initializing the Single-Node Cluster
Formatting the Name Node:
When setting up the cluster for the first time, we need to format the Name Node in HDFS.
$ bin/hadoop namenode -format
Starting all daemons:
$ bin/start-all.sh
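To confirm the daemons actually started, check with jps (assuming the JDK's jps tool is on your PATH); on a working single-node Hadoop 1.x setup it should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, plus Jps itself:
$ jps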
You should now be able to reach the NameNode and JobTracker web interfaces in your browser (after a short startup delay) at the following URLs:
NameNode: http://localhost:50070/
JobTracker: http://localhost:50030/
Stopping all daemons:
$ bin/stop-all.sh
You can also start and stop the daemons separately.
HDFS:
$ bin/start-dfs.sh
$ bin/stop-dfs.sh
MapReduce:
$ bin/start-mapred.sh
$ bin/stop-mapred.sh
Now you can run an example, such as the Java word count example. If you are looking for examples you can run without changing your usual style of code, wait for my next post, where I will run a Python MapReduce job.
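As a quick smoke test, here is a rough sketch of running the bundled word count job over Hadoop's own config files (the daemons must be running; the examples jar name matches the hadoop-1.2.1 tarball, so adjust it if yours differs):
$ cd /usr/hadoop/hadoop
$ bin/hadoop fs -mkdir input
$ bin/hadoop fs -put conf/*.xml input
$ bin/hadoop jar hadoop-examples-1.2.1.jar wordcount input output
$ bin/hadoop fs -cat output/part*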