Monday 3 March 2014

Running Your First Example on Hadoop Using Python


Overview

Even though the Hadoop framework is written in Java, we can use other languages such as Python and C++ to write MapReduce programs for Hadoop. However, Hadoop's documentation suggests translating your code into a Java jar file using Jython, which is not very convenient and can even be problematic if you depend on Python features not provided by Jython.

Example

We will write a simple WordCount MapReduce program in pure Python. The input is a set of text files and the output is a file listing each word and its count. You could use other languages, such as Perl, in the same way.

Prerequisites

You should have a Hadoop cluster up and running. If you do not have a cluster ready yet, try my single-node cluster post below to get started.

MapReduce

The idea behind the Python code is to use the Hadoop Streaming API to pass data and results between our map and reduce code via STDIN (sys.stdin) and STDOUT (sys.stdout): each script reads its input from STDIN and prints its output to STDOUT.

mapper.py


#!/usr/bin/env python
# mapper.py: read lines from STDIN and emit "word<TAB>1" for every word

import sys

for line in sys.stdin:
    line = line.strip()      # remove leading/trailing whitespace
    words = line.split()     # split the line into words
    for word in words:
        # write the result to STDOUT; this becomes the reducer's input
        print '%s\t%s' % (word, 1)


reducer.py

#!/usr/bin/env python
# reducer.py: read "word<TAB>count" pairs from STDIN and sum the counts
# for each word. Hadoop's shuffle phase sorts the mapper output by key,
# so all counts for a given word arrive one after another.

import sys

current_word = None
current_count = 0
word = None

for line in sys.stdin:
    line = line.strip()
    # parse the key/value pair produced by mapper.py
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently skip this line
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # we have reached a new key: emit the previous word's total
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# emit the last word, if there was any input at all
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
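
Before submitting the job to Hadoop, it is a good idea to test both scripts locally with a plain shell pipeline; the sort step imitates Hadoop's shuffle/sort phase. This is only a sanity check and assumes the two scripts sit in your current directory and are executable:

$ chmod +x mapper.py reducer.py
$ echo "foo foo quux labs foo bar quux" | ./mapper.py | sort -k1,1 | ./reducer.py

Each line of the output should be a word followed by its count, separated by a tab.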


Running the Hadoop Job

Download the example data (a few plain-text books) to a local directory such as /home/elite/Downloads/examples/
Book1
Book2
Book3



Start Cluster

$ bin/start-all.sh

Copy data from the local file system to HDFS
$ bin/hadoop dfs -mkdir /home/hdpuser/wordscount
$ bin/hadoop dfs -copyFromLocal /home/elite/Downloads/examples/ /home/hdpuser/wordscount/

Here we created a directory named wordscount in the Hadoop file system and copied our local directory containing the test data into HDFS. We can check whether the files were copied properly by listing the directory contents, as shown below.

Check files on dfs
$ bin/hadoop dfs -ls /home/hdpuser/wordscount

Run MapReduce Job

I have placed both mapper.py and reducer.py in /home/hdpuser/; here is the command to run the job.
$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar \
-file /home/hdpuser/mapper.py -mapper /home/hdpuser/mapper.py \
-file /home/hdpuser/reducer.py -reducer /home/hdpuser/reducer.py \
-input /home/hdpuser/wordscount/* -output /home/hdpuser/wordscount.out

You can check the job status from the terminal or on the JobTracker web page at http://localhost:50030/ configured in your cluster setup. After the job completes, we can get the results back by copying the output directory from the Hadoop file system to the local file system.

$ bin/hadoop dfs -copyToLocal /home/hdpuser/wordscount.out /home/hdpuser/

Check Result

$ vi /home/hdpuser/wordscount.out/part-00000
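
If you only want to inspect the result, you can also read it directly from HDFS without copying it back; something along these lines should work with the Hadoop 1.x shell:

$ bin/hadoop dfs -cat /home/hdpuser/wordscount.out/part-00000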

Stop the running cluster

$ bin/stop-all.sh

Sunday 2 March 2014

Installing Hadoop Single Node - 1.2.1

Get Started

Now we will look at how to install a stable version of Apache Hadoop on a laptop running Linux Mint 15; the steps will also work on all Debian-based systems, including Ubuntu. To start, we need the Hadoop package and a working Java installation. To install Java, if it is not already installed, follow my Install Java post; to check which versions of Java are supported with Hadoop, see Hadoop Java Versions. The next step is to acquire Hadoop, which can be downloaded from the Hadoop webpage; we opted for hadoop-1.2.1 in this blog.

Create Dedicated Hadoop User

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hdpuser

Give user sudo rights

$ sudo nano /etc/sudoers
Add this line at the end of the file:
hdpuser ALL=(ALL:ALL) ALL

Configuring Secure Shell (SSH)   

Communication between master and slave nodes uses SSH, so we need to make sure an SSH server is installed and the SSH daemon is running.

Install the server with the following command:

$ sudo apt-get install openssh-server

You can check the status of the server with this command:

$ /etc/init.d/ssh status

To start the SSH server use:

$ /etc/init.d/ssh start

Now that the SSH server is running, we need to set up a local SSH connection that does not prompt for a password. To enable passphraseless SSH use:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

To check SSH:

$ ssh localhost
$ exit

Disabling IPv6

We need to make sure IPv6 is disabled; it is best to disable it because all Hadoop communication between nodes is IPv4-based.

For this, first access the file /etc/sysctl.conf

$ sudo nano /etc/sysctl.conf
Add the following lines at the end:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Save and exit

Reload sysctl for changes to take effect

$ sudo sysctl -p /etc/sysctl.conf

If the following command returns 1 (after reloading sysctl or rebooting), IPv6 is disabled.

$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6

Install Hadoop

Download Version 1.2.1 (Stable Version)

Make Hadoop installation directory

$ sudo mkdir -p /usr/hadoop

Copy Hadoop installer to installation directory

$ sudo cp -r ~/Downloads/hadoop-1.2.1.tar.gz /usr/hadoop

Extract Hadoop installer

$ cd /usr/hadoop
$ sudo tar xvzf hadoop-1.2.1.tar.gz

Rename it to hadoop

$ sudo mv hadoop-1.2.1 hadoop

Change owner to hdpuser for this folder

$ sudo chown -R hdpuser:hadoop hadoop

Update .bashrc with Hadoop-related environment variables

$ sudo nano ~/.bashrc
Add following lines at the end:
# Set HADOOP_HOME
export HADOOP_HOME=/usr/hadoop/hadoop
# Set JAVA_HOME
# Important: if you installed Java via apt-get,
# use /usr instead of /usr/local/java/jdk1.7.0_51
export JAVA_HOME=/usr/local/java/jdk1.7.0_51
# Add Hadoop bin directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin

Save & Exit

Reload bashrc

$ source ~/.bashrc
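
To confirm that the new environment is picked up, you can check that the hadoop command resolves from the shell where you just sourced .bashrc; for example:

$ hadoop version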


Update JAVA_HOME in hadoop-env.sh

$ cd /usr/hadoop/hadoop
$ sudo nano conf/hadoop-env.sh

Add the line:
export JAVA_HOME=/usr/local/java/jdk1.7.0_51

Save and exit

Create a Directory to hold Hadoop’s Temporary Files:

$ sudo mkdir -p /usr/hadoop/tmp

Give hdpuser ownership of this directory

$ sudo chown hdpuser:hadoop /usr/hadoop/tmp


Hadoop Configurations

Modify conf/core-site.xml – Core Configuration

$ sudo nano conf/core-site.xml

Add the following lines between the <configuration> tags:
<property>
   <name>hadoop.tmp.dir</name>
   <value>/usr/hadoop/tmp</value>
   <description>Hadoop's temporary directory</description>
</property>
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:54310</value>
   <description>Specifying HDFS as the default file system.</description>
</property>

Modify conf/mapred-site.xml – MapReduce configuration

$ sudo nano conf/mapred-site.xml

Add the following lines between the <configuration> tags:
<property>
   <name>mapred.job.tracker</name>
   <value>localhost:54311</value>
   <description>The URI is used to monitor the status of MapReduce tasks</description>
</property>

Modify conf/hdfs-site.xml – File Replication

$ sudo nano conf/hdfs-site.xml

Add the following lines between the <configuration> tags:
<property>
   <name>dfs.replication</name>
   <value>1</value>
   <description>Default block replication.</description>
</property>

Initializing the Single-Node Cluster


Formatting the Name Node:

When setting up the cluster for the first time, we need to format the NameNode in HDFS.
$ bin/hadoop namenode -format

Starting all daemons:

$ bin/start-all.sh
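
A quick way to verify that everything came up is the jps tool (the JVM Process Status tool mentioned in my Java post):

$ jps

On a single-node setup you should see NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker listed, along with Jps itself.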

You should now be able to view the NameNode and JobTracker web interfaces in your browser (after a short delay while the daemons start up) at the following URLs:

NameNode: http://localhost:50070/
JobTracker: http://localhost:50030/

Stopping all daemons:

$ bin/stop-all.sh

You can also start and stop the daemons separately:

HDFS:

$ bin/start-dfs.sh
$ bin/stop-dfs.sh

MapReduce:

$ bin/start-mapred.sh
$ bin/stop-mapred.sh


Now you can run the examples, such as the Java WordCount example. If you are looking for examples to run without changing your style of code, I am going to run a Python MapReduce job; wait for that post.
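
For reference, the bundled Java WordCount example can be run with a command along these lines (the examples jar ships in the Hadoop install directory; its exact name may differ depending on your build, and the input/output HDFS paths here are just placeholders):

$ bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /home/hdpuser/wordscount /home/hdpuser/wordscount-java.out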

Saturday 1 March 2014

Installing Sun-Java JDK 7

Install Java JDK

OS: Linux Mint 15; this will also work on all Debian-based systems, including Ubuntu.

Easy Way:

The simple and easy way to install the JDK is via an apt-get repository (PPA), but note that the PPA can sometimes become outdated.
This installs JDK 7 (which includes Java JDK, JRE and the Java browser plugin).

Remove any installed version of OpenJDK:
$ sudo apt-get purge openjdk-\*

Add PPA and update apt-get repo

$ sudo apt-get update
$ sudo apt-get install python-software-properties
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update

Install it:
$ sudo apt-get install oracle-java7-installer

Check version and process id
Check your Java version to ensure installations and settings:
$ java -version
Verify that JPS (JVM Process Status tool) is up and running
$ jps

Manual way:

1) Remove any previous OpenJDK installations
$ sudo apt-get purge openjdk-\*

2) Make directory to hold Sun Java
$ sudo mkdir -p /usr/local/java

3) Download Oracle Java Sun (JDK/JRE) from Oracle’s website:

JDK Download and JRE Download. Normally the downloaded files will be placed in the /home/<your_user_name>/Downloads folder.

4) Copy the downloaded files to the Java directory
$ cd /home/<your_user_name>/Downloads
$ sudo cp -r jdk-7u51-linux-x64.tar.gz /usr/local/java
$ sudo cp -r jre-7u51-linux-x64.tar.gz /usr/local/java

5) Unpack the compressed binaries
$ cd /usr/local/java
$ sudo tar xvzf jdk-7u51-linux-x64.tar.gz
$ sudo tar xvzf jre-7u51-linux-x64.tar.gz

6) Cross-check the extracted binaries:
$ ls -a
The following two folders should be created: jdk1.7.0_51 and jre1.7.0_51

7) To provide the system PATH with the JDK/JRE paths (set in /etc/profile), first open the file:

$ sudo nano /etc/profile

and add the following lines at the end:
JAVA_HOME=/usr/local/java/jdk1.7.0_51
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
JRE_HOME=/usr/local/java/jre1.7.0_51
PATH=$PATH:$HOME/bin:$JRE_HOME/bin
export JAVA_HOME
export JRE_HOME
export PATH

Save and exit (CTRL+O then Enter, then press CTRL+X then Enter)

8) Inform the OS of the Oracle Sun Java location so that it is ready for use:

JDK is available:
$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.7.0_51/bin/javac" 1

JRE is available:
$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jre1.7.0_51/bin/java" 1

Java Web Start is available:
$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jre1.7.0_51/bin/javaws" 1

9) Make Oracle Sun JDK/JRE the default on your system:
Set JRE:
$ sudo update-alternatives --set java /usr/local/java/jre1.7.0_51/bin/java

Set javac Compiler:
$ sudo update-alternatives --set javac /usr/local/java/jdk1.7.0_51/bin/javac

Set Java Web Start:
$ sudo update-alternatives --set javaws /usr/local/java/jre1.7.0_51/bin/javaws

10) Re-load the /etc/profile
$ source /etc/profile

11) Check your Java version to ensure installations and settings:
$ java -version

12) Verify that JPS (JVM Process Status tool) is up and running
$ jps

This will show the process ID of the jps process.


Big Data and Analytics - Hadoop

What is Big Data?

Big data is a buzzword describing volumes of structured or unstructured data so large and complex that they are impractical to manage with traditional software tools. Enterprises now have data that is too large and moves too fast for current data processing capacities; examples run to petabytes or exabytes, and billions to trillions of records. But big data is not only about size, as described:

"Big Data Refer to technologies and initiatives that involve data that is too diverse, fast-changing or massive for conventional technologies, skills and infra-structure to address efficiently. Said differently, the volume, velocity or variety of data is too great." - Mongodb

Today's technologies have made it possible to analyze big data and realize value from it; for example, retailers can track user web clicks to identify behavioral trends and improve campaigns. Big data relates to data creation, storage, retrieval and analysis that is remarkable in terms of volume, velocity, and variety:

Volume: a normal computer has 250 gigabytes to 1 terabyte of storage, while Facebook ingests around 500 terabytes of new data every day.

Velocity: capturing ad impressions or user web clicks can require handling millions of events per second.

Variety: big data is not only numbers, dates and strings; it also includes geospatial data, 3D data, audio, video, etc.

Big Data Analytics?

As described, big data analytics refers to the process of collecting, organizing and analyzing large sets of data to discover patterns and other useful information. Not only does it help to understand the information within the data, it also helps identify the data that is most important to the business and to future business decisions. Big data analysts basically want the knowledge that comes from analyzing the data.

Hadoop?

Hadoop is a software technology designed to store and process large volumes of data using a cluster of commodity servers and storage. It is an open-source Apache project that originated in 2005 at Yahoo. It consists of a distributed file system, called HDFS, and a data processing and execution model called MapReduce. Visit the next post to install and configure it, then practice MapReduce.