When we boot the system and run the Hadoop startup command start-all.sh, the following error appears:

[root@master ~]# start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-master.out
master: ssh: connect to host master port 22: Network is unreachable
master: ssh: connect to host master port 22: Network is unreachable
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-master.out
master: ssh: connect to host master port 22: Network is unreachable

Check the running processes:

[root@master ~]# jps
2739 Jps
[root@master ~]#

This shows that Hadoop did not start.

My solution:

First, check the current virtual machine's IP address, which turns out to be 192.168.40.128:

[root@master ~]# ifconfig
eth1      Link encap:Ethernet  HWaddr 00:0C:29:4E:BC:7A
          inet addr:192.168.40.128  Bcast:192.168.40.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe4e:bc7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:127 errors:0 dropped:0 overruns:0 frame:0
          TX packets:134 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12497 (12.2 KiB)  TX bytes:14291 (13.9 KiB)
          Interrupt:19 Base address:0x2024

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1616 (1.5 KiB)  TX bytes:1616 (1.5 KiB)
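Rather than reading the address off the screen, the IPv4 address can also be pulled out of the ifconfig output with awk. A minimal sketch, assuming the interface is eth1 as on this machine (here awk is fed a captured sample of the output above; on a live system you would pipe `ifconfig eth1` in directly):

```shell
# Captured sample of the eth1 section of ifconfig output.
sample='eth1      Link encap:Ethernet  HWaddr 00:0C:29:4E:BC:7A
          inet addr:192.168.40.128  Bcast:192.168.40.255  Mask:255.255.255.0'

# On the "inet addr:..." line, field 2 is "addr:192.168.40.128";
# strip the "addr:" prefix and print the bare address.
ip=$(printf '%s\n' "$sample" | awk '/inet addr:/ {sub(/addr:/, "", $2); print $2; exit}')
echo "$ip"
```

This is the address that the `master` entry in /etc/hosts must point to.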

Then open /etc/hosts and inspect its contents:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.80.100 master    # hadoop IP configuration

We can see that the Hadoop host `master` is mapped to 192.168.80.100, which does not match the system's current IP. In vi, press i to enter insert mode, change the IP to the current system IP (192.168.40.128), then press Esc and type :wq to save and quit. Then rerun start-all.sh:
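The same fix can be applied without opening vi by rewriting the `master` line with sed. A minimal sketch; for safety it works on a throwaway copy at /tmp/hosts.demo (a hypothetical path), whereas on the real machine you would target /etc/hosts as root:

```shell
# Recreate the stale hosts file in a demo copy.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.80.100 master
EOF

# Replace the stale address on the line that maps the host "master"
# with the machine's current address.
sed -i 's/^192\.168\.80\.100[[:space:]]\{1,\}master/192.168.40.128 master/' /tmp/hosts.demo

grep master /tmp/hosts.demo
```

After this, `master` resolves to the address that sshd is actually listening on, so the `start-all.sh` ssh connections can succeed.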

[root@master ~]# start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-master.out
master: Warning: Permanently added the RSA host key for IP address '192.168.40.128' to the list of known hosts.
master: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-master.out
master: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-secondarynamenode-master.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-master.out
master: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-master.out

The startup succeeds. Check the processes again:

[root@master ~]# jps
3065 SecondaryNameNode
3400 Jps
3146 JobTracker
2951 DataNode
2843 NameNode
3260 TaskTracker

All the Hadoop daemons are now running; the problem is solved.