Step 1: add the disk, create a partition, and mount it on the machine
- connect the HDD to the machine
- list the disk devices
ls /dev/[sh]d*
The device without a trailing number should be the new HDD, e.g. /dev/sdb
- create a partition on the new disk (assume a single partition spanning the entire disk)
fdisk /dev/sdb => command n to add a new partition => select primary => accept default partition number 1 => accept default first cylinder => accept default last cylinder => command w to write the table to disk and exit
- list partitions on the new disk
fdisk -l /dev/sdb
you should see /dev/sdb1
- format the partition
mkfs -t ext4 /dev/sdb1
- get the UUID of disk
blkid
copy the UUID; assume it's a-b-c-d-e
- mount the partition; assume the mount point is /data2. Add this line to /etc/fstab:
UUID=a-b-c-d-e /data2 ext4 defaults 0 0
- reboot
- mount /data2
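The whole of Step 1 can also be scripted. This is a hedged sketch, not a drop-in script: it assumes the new disk is /dev/sdb and the mount point is /data2 (substitute your own), must run as root, and swaps the interactive fdisk session for a non-interactive parted call; everything else mirrors the steps above.

```shell
build_fstab_line() {            # helper: compose the /etc/fstab entry
  echo "UUID=$1 $2 ext4 defaults 0 0"
}

provision_disk() {
  disk="$1"; mnt="$2"
  parted -s "$disk" mklabel msdos mkpart primary ext4 0% 100%  # one partition, whole disk
  mkfs -t ext4 "${disk}1"                                      # format it
  uuid=$(blkid -s UUID -o value "${disk}1")                    # grab its UUID
  mkdir -p "$mnt"
  build_fstab_line "$uuid" "$mnt" >> /etc/fstab                # remount at every boot
  mount "$mnt"
}

# on the real machine (destructive, double-check the device name first):
#   provision_disk /dev/sdb /data2
```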
Step 2: change the ownership of the folder
This step is very important!
chgrp hadoop /data2
chown hdfs /data2
Step 3: add this folder to the DataNode data directory setting via CDH Manager
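Under the hood, this CDH Manager change adds the new directory to the comma-separated dfs.datanode.data.dir property (dfs.data.dir on older Hadoop 1.x) in hdfs-site.xml. A sketch of the resulting entry, where /data1/dfs/dn is an assumed pre-existing data directory and /data2/dfs/dn is the newly added one:

```
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- existing directory (assumed), then the new disk's directory -->
  <value>/data1/dfs/dn,/data2/dfs/dn</value>
</property>
```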
Step 4: restart HDFS
check the new capacity on the NameNode's status page: http://xxxx:50070/dfshealth.jsp
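As an alternative to the web UI, the capacity can be checked from the shell: `hdfs dfsadmin -report` prints a "Configured Capacity" line, and the small filter below pulls out its value (the filter itself is plain text processing, so it can be tried on any sample of the report output).

```shell
configured_capacity() {
  # keep the first "Configured Capacity" line and print what follows the colon
  grep -m1 '^Configured Capacity' | awk -F': ' '{print $2}'
}

# on a live cluster:
#   hdfs dfsadmin -report | configured_capacity
```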