Monday, January 21, 2019

Hadoop Single Node Cluster Setup On Ubuntu 14.04



Single Node Cluster
Ubuntu 14.04
AWS EC2






Once you are logged in to AWS, click on EC2.

To start using Amazon EC2, you will want to launch a virtual server, known as an Amazon EC2 instance.



Now search for Ubuntu Server 14.04 LTS.

                   Connecting to your AWS instance

Use the key pair you downloaded and your instance's public DNS name; the key name and address below are placeholders:


cd Downloads/

chmod 400 Your_Key.pem

ssh -i "Your_Key.pem" ubuntu@ec2-127-127-127-127.compute-1.amazonaws.com

                     Update the Ubuntu package lists


sudo apt-get update


                              Install Java (OpenJDK 7) on Ubuntu



sudo apt-get install openjdk-7-jdk -y
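Once the install finishes, you can confirm that the JDK is available; the exact build number in the output will vary:

java -version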


                Create an SSH key for passwordless login (press Enter at each prompt to accept the defaults and an empty passphrase)



ssh-keygen



cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
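Hadoop's start scripts use SSH to reach localhost when launching the daemons, so it is worth a quick sanity check; you may be asked once to accept the host fingerprint, but you should not be prompted for a password:

ssh localhost
exit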

Download Hadoop 1.2.1 for Ubuntu 14.04

Mirror picker page:

https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

Direct link to the Apache archive:

http://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

wget http://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

tar -xzvf hadoop-1.2.1.tar.gz



sudo mv hadoop-1.2.1 /usr/local/hadoop
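A quick listing should confirm that the conf directory used in the next section is in place (it contains hadoop-env.sh, core-site.xml, hdfs-site.xml and mapred-site.xml, among other files):

ls /usr/local/hadoop/conf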


Configure the ~/.bashrc file

nano ~/.bashrc

Add the following lines at the end of the file:

export HADOOP_PREFIX=/usr/local/hadoop/
export PATH=$PATH:$HADOOP_PREFIX/bin

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64 
export PATH=$PATH:$JAVA_HOME/bin
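Reload the profile so the new variables take effect in the current shell, then check that the hadoop command resolves; it should report version 1.2.1:

source ~/.bashrc

hadoop version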

Setup Configuration Files

The following files have to be modified to complete the Hadoop setup. For the three .xml files, the <property> blocks shown below go inside the existing <configuration> ... </configuration> element:

/usr/local/hadoop/conf/hadoop-env.sh
/usr/local/hadoop/conf/core-site.xml
/usr/local/hadoop/conf/mapred-site.xml
/usr/local/hadoop/conf/hdfs-site.xml
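Each file can be opened with nano, the same editor used for ~/.bashrc:

nano /usr/local/hadoop/conf/hadoop-env.sh
nano /usr/local/hadoop/conf/core-site.xml
nano /usr/local/hadoop/conf/mapred-site.xml
nano /usr/local/hadoop/conf/hdfs-site.xml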

hadoop-env.sh

Add (or uncomment and edit) the following lines:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

core-site.xml


<property> 
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value> 
</property>

<property> 
       <name>hadoop.tmp.dir</name> 
       <value>/usr/local/hadoop/tmp</value> 
</property> 

mapred-site.xml

<property> 
       <name>mapred.job.tracker</name> 
       <value>hdfs://localhost:9001</value>
</property> 

hdfs-site.xml

Replication is set to 1 because a single-node cluster has only one DataNode:

<property>
     <name>dfs.replication</name>
     <value>1</value>
</property>


Final steps

Create the temporary directory declared in core-site.xml, then format the NameNode:

mkdir /usr/local/hadoop/tmp

hadoop namenode -format
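If the format succeeds, the output should end with a line similar to the one below; the exact path follows the hadoop.tmp.dir value from core-site.xml:

INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.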




Start the Hadoop daemons, then list the running Java processes to verify that everything came up:

start-all.sh

jps
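If the daemons started cleanly, jps should list the five Hadoop 1.x processes plus Jps itself; the process IDs below are only illustrative:

2081 NameNode
2210 DataNode
2342 SecondaryNameNode
2425 JobTracker
2558 TaskTracker
2693 Jps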




