Resource Downloads
1. Download hadoop-2.5.2.tar.gz: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
2. Download hadoop2.5.2-eclipse-plugin: http://download.csdn.net/detail/lzm1340458776/8519515
3. Download hadoop-common-2.2.0-bin: https://github.com/srccodes/hadoop-common-2.2.0-bin
Installation
1. Extract hadoop-2.5.2.tar.gz to a directory on the Windows 7 machine.
2. Configure the local Hadoop environment (typically by pointing the HADOOP_HOME environment variable at the directory extracted in step 1 and adding its bin folder to PATH).
3. In Eclipse, open Window --> Preferences --> Hadoop Map/Reduce and point it at the Hadoop directory extracted in step 1.
4. Open Window --> Show View and select Map/Reduce Locations.
5. Configure the connection parameters.
Note: Host can be either an IP address or the corresponding hostname (the hostname must be mapped in the Windows hosts file, C:\Windows\System32\drivers\etc\hosts). The Map/Reduce port can be left at its default; the HDFS port must be the port configured in core-site.xml. A sample hosts entry and the matching core-site.xml setting are shown below.
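For example, a minimal sketch of the two settings; the hostname "master", the IP address, and port 9000 are assumptions, so substitute the values from your own cluster:

hosts entry (C:\Windows\System32\drivers\etc\hosts):
192.168.1.100    master

core-site.xml on the cluster:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>

The HDFS port entered in the Eclipse plugin must match the port in the fs.defaultFS value.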
6. Browse the file system in the DFS Locations view to confirm the connection works (a quick programmatic check is sketched below).
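As an alternative check outside Eclipse, a small Java snippet can list a directory over the same connection. This is only a sketch; the URI and the path "/" are assumptions and should match your core-site.xml:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfs {
    public static void main(String[] args) throws Exception {
        // The HDFS URI is an assumption; use the fs.defaultFS value from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), new Configuration());
        // Print every entry directly under the HDFS root.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}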
Troubleshooting:
7. Write a WordCount program. Running it may produce the following error:
java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(Unknown Source)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:774)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:646)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at com.lixue.mapreduce.chain.WordCountMapReduce.main(WordCountMapReduce.java:70)
Solution:
Extract hadoop-common-2.2.0-bin and use its bin directory to overwrite the bin folder inside the Hadoop directory extracted earlier.
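If the NullPointerException from Shell persists after replacing the bin folder, a common extra step (an assumption on my part, not something from the original article) is to tell Hadoop where its Windows helpers such as winutils.exe live by setting hadoop.home.dir at the very top of the driver's main method:

// The path is an assumption; use the directory where hadoop-2.5.2 was extracted.
System.setProperty("hadoop.home.dir", "D:/hadoop-2.5.2");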
Running the MapReduce program now, you may find that the console only prints the following:
log4j:WARN No appenders could be found for logger (com.hp.hpl.jena.util.FileManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Solution: copy the hadoop-2.5.2/etc/hadoop/log4j.properties file into the project's src directory.
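Alternatively, if you prefer not to copy Hadoop's bundled file, a minimal log4j.properties in src is enough to make the job output visible. The sketch below is one possible configuration (appender name and pattern are arbitrary choices, not taken from the article):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n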
Run the MapReduce program again and yet another problem appears:
java.lang.RuntimeException: java.lang.InstantiationException
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:490)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at com.lixue.mapreduce.chain.WordCountMapReduce.main(WordCountMapReduce.java:72)
Caused by: java.lang.InstantiationException
    at sun.reflect.InstantiationExceptionConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
    ... 11 more
Solution: FileInputFormat is an abstract class, so Hadoop cannot instantiate it as the job's input format, which is what causes the InstantiationException. In the driver class, change:
job.setInputFormatClass(FileInputFormat.class);
to the concrete subclass:
job.setInputFormatClass(TextInputFormat.class);
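For reference, a minimal WordCount driver using TextInputFormat might look like the sketch below. The package name is taken from the stack traces above, but the Mapper/Reducer bodies and the input/output paths are assumptions, not the exact code from this article:

package com.lixue.mapreduce.chain;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCountMapReduce {

    // Mapper: split each line into words and emit (word, 1).
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducer: sum the counts emitted for each word.
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountMapReduce.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Use the concrete TextInputFormat, not the abstract FileInputFormat.
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Input and output paths are placeholders; point them at your own HDFS paths.
        FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/input"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}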