Submitting Jobs to YARN from Eclipse: Debugging Hadoop 2.2 in Distributed Mode


Local-mode debugging of Hadoop 2.2 in Eclipse already works, so in this post I focus on genuinely submitting jobs from Eclipse to YARN, that is, debugging in distributed mode. Being able to debug Hadoop MapReduce programs from Eclipse makes learning Hadoop noticeably easier and clearer.

If you have not read my earlier article on debugging Hadoop in Eclipse in local mode, read that first to get familiar with the basic problems and their fixes.
Now to the main topic. Since local-mode debugging already works in my Eclipse setup, switching to distributed mode is not too difficult. To submit a job from Eclipse as a client to the YARN cluster, the whole project has to be packaged into a single jar; I use an ant script for this (attached at the end of the article, with a rough sketch at the packaging step below). The biggest problem I ran into was the following exception:

NodeManager log
2014-06-11 17:32:19,761 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1401177251807_0034_01_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

This problem has a known fix online that involves downloading and applying two patches and rebuilding, which is fairly cumbersome. After reading another writer's article on it, I found the approach below simpler and more convenient. The root cause of the exception is the mismatch between Windows and Linux environment-variable syntax: Windows uses %VAR% while Linux uses $VAR, so the launch command generated on the Windows client cannot be executed on the Linux side. The problem does not occur with Eclipse on Linux; it only appears with Eclipse on Windows. So what we need to do is change a few methods of org.apache.hadoop.mapred.YARNRunner to eliminate the exception.
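A quick way to see the mismatch for yourself, as a small sketch that only assumes the ApplicationConstants.Environment API Hadoop 2.2 ships (which is what YARNRunner uses when building the ApplicationMaster launch command):

Java code
import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;

public class EnvSyntaxDemo {
    public static void main(String[] args) {
        // Prints "%JAVA_HOME%" on a Windows client and "$JAVA_HOME" on Linux.
        // The AM launch command is built on the client, so a Windows Eclipse
        // ships "%JAVA_HOME%/bin/java" to the Linux NodeManager, where /bin/bash
        // cannot expand it -- producing the "fg: no job control" error above.
        System.out.println(Environment.JAVA_HOME.$());
    }
}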
Concretely, rewrite the relevant methods of the YARNRunner source (YARNRunner.java lives in the hadoop-mapreduce-client-jobclient Maven project, in the org.apache.hadoop.mapred package): create the same package and class name under src, so that your copy shadows the class bundled inside the jar.
At line 390 of YARNRunner.java (Apache Hadoop 2.2 source):

Java code
// Setup the command to run the AM
List<String> vargs = new ArrayList<String>(8);
vargs.add(Environment.JAVA_HOME.$() + "/bin/java");

Change this to:

Java code
vargs.add("$JAVA_HOME/bin/java");
vargs.add("$JAVA_HOME/bin/java");

Next, add a path-conversion helper method to YARNRunner.java:

Java code
private void replaceEnvironment(Map<String, String> environment) {
    String tmpClassPath = environment.get("CLASSPATH");
    // Convert Windows separators and variable syntax to their Unix equivalents
    tmpClassPath = tmpClassPath.replaceAll(";", ":");
    tmpClassPath = tmpClassPath.replaceAll("%PWD%", "\\$PWD");
    tmpClassPath = tmpClassPath.replaceAll("%HADOOP_MAPRED_HOME%", "\\$HADOOP_MAPRED_HOME");
    tmpClassPath = tmpClassPath.replaceAll("\\\\", "/");
    environment.put("CLASSPATH", tmpClassPath);
}
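To see exactly what the method does, here is a small self-contained sketch that applies the same replacements to a hypothetical Windows-style CLASSPATH value (the sample string is illustrative, not taken from a real cluster):

Java code
import java.util.HashMap;
import java.util.Map;

public class ReplaceEnvironmentDemo {
    public static void main(String[] args) {
        Map<String, String> environment = new HashMap<String, String>();
        // A CLASSPATH roughly as a Windows client would assemble it (hypothetical)
        environment.put("CLASSPATH",
            "%PWD%;%HADOOP_MAPRED_HOME%\\share\\hadoop\\mapreduce\\*;job.jar");

        String tmpClassPath = environment.get("CLASSPATH");
        tmpClassPath = tmpClassPath.replaceAll(";", ":");
        tmpClassPath = tmpClassPath.replaceAll("%PWD%", "\\$PWD");
        tmpClassPath = tmpClassPath.replaceAll("%HADOOP_MAPRED_HOME%", "\\$HADOOP_MAPRED_HOME");
        tmpClassPath = tmpClassPath.replaceAll("\\\\", "/");
        environment.put("CLASSPATH", tmpClassPath);

        // Prints: $PWD:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:job.jar
        System.out.println(environment.get("CLASSPATH"));
    }
}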

Finally, at line 466 of YARNRunner.java, add a call to it:

Java code
replaceEnvironment(environment);
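For orientation: in the Hadoop 2.2 source this lands inside createApplicationSubmissionContext(), after the CLASSPATH has been assembled with the client OS's syntax and just before the environment map is packed into the ApplicationMaster's ContainerLaunchContext. Roughly, as a paraphrase from memory rather than an exact excerpt (check it against your own copy of the source):

Java code
// inside YARNRunner.createApplicationSubmissionContext(...)
Map<String, String> environment = new HashMap<String, String>();
MRApps.setClasspath(environment, conf);   // builds CLASSPATH with the client OS's syntax
// ...
replaceEnvironment(environment);          // the added line: rewrite to Unix syntax

ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
    localResources, environment, vargsFinal, null, securityTokens, acls);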

With these changes in place, the original exception is resolved. My distributed test example is still the "hello world" of MapReduce, word count; source code below:

Java code
package com.qin.wordcount;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/***
 * Fully distributed test on Hadoop 2.2.0:
 * a WordCount example.
 *
 * @author qindongliang
 *
 * Hadoop QQ discussion group: 376932160
 */
public class MyWordCount {

    /**
     * Mapper
     **/
    private static class WMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private IntWritable count = new IntWritable(1);
        private Text text = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Each input line has the form word#count
            String values[] = value.toString().split("#");
            // System.out.println(values[0] + "========" + values[1]);
            count.set(Integer.parseInt(values[1]));
            text.set(values[0]);
            context.write(text, count);
        }
    }

    /**
     * Reducer
     **/
    private static class WReducer extends Reducer<Text, IntWritable, Text, Text> {

        private Text t = new Text();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> value, Context context)
                throws IOException, InterruptedException {
            int count = 0;
            for (IntWritable i : value) {
                count += i.get();
            }
            t.set(count + "");
            context.write(key, t);
        }
    }

    /**
     * Change 1 (from the local-mode post):
     * (1) add the checkHadoopHome path in the Shell source
     * (2) line 974, in FileUtils
     **/
    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        conf.set("mapreduce.job.jar", "myjob.jar");
        conf.set("fs.defaultFS", "hdfs://192.168.46.28:9000");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "192.168.46.28:8032");

        /** The job **/
        // Job job = new Job(conf, "testwordcount"); // deprecated API
        Job job = Job.getInstance(conf, "new api");
        job.setJarByClass(MyWordCount.class);
        System.out.println("Mode:  " + conf.get("mapreduce.jobtracker.address"));
        // job.setCombinerClass(PCombine.class);
        // job.setNumReduceTasks(3); // set to 3

        job.setMapperClass(WMapper.class);
        job.setReducerClass(WReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        String path = "hdfs://192.168.46.28:9000/qin/output";
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path(path);
        if (fs.exists(p)) {
            fs.delete(p, true);
            System.out.println("Output path exists; deleted!");
        }
        FileInputFormat.setInputPaths(job, "hdfs://192.168.46.28:9000/qin/input");
        FileOutputFormat.setOutputPath(job, p);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
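Note that this WordCount variant expects pre-tokenized input: the mapper splits each line on "#" and treats the second field as a count, so the input file must follow the word#count format. A hypothetical sample of /qin/input, chosen to be consistent with the counters printed further below (4 map input records, 3 reduce output records), and the result it would produce:

/qin/input (hypothetical sample):
hadoop#1
hadoop#2
spark#1
storm#1

/qin/output/part-r-00000 (expected):
hadoop	3
spark	1
storm	1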

When running, remember to copy the cluster's configuration files core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml into the src root; it is also worth putting a log4j.xml there so the logs are easy to read (a sample follows the snippet below). In mapred-site.xml, add the following property:

Xml code
<property>
  <name>mapred.remote.os</name>
  <value>Linux</value>
  <description>Remote MapReduce framework's OS, can be either Linux or Windows</description>
</property>
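As mentioned above, a log4j.xml in the src root makes the client-side logs readable. A minimal sketch (the layout pattern is my own assumption, chosen to roughly match the console format shown below):

Xml code
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <!-- e.g. INFO - RMProxy.createRMProxy(56) | Connecting to ... -->
            <param name="ConversionPattern" value="%-5p - %C{1}.%M(%L) | %m%n"/>
        </layout>
    </appender>
    <root>
        <priority value="info"/>
        <appender-ref ref="console"/>
    </root>
</log4j:configuration>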

Then package the project into a jar and run the main class to submit the job.
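I use an ant script for the packaging (my full script is attached at the end of the article). As a rough, hypothetical sketch of what such a build file can look like, with the directory names (src, lib, build) and the jar name as illustrative assumptions:

Xml code
<?xml version="1.0" encoding="UTF-8"?>
<project name="myjob" default="jar" basedir=".">
    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false">
            <classpath>
                <!-- Hadoop client jars, copied into a local lib directory -->
                <fileset dir="lib" includes="**/*.jar"/>
            </classpath>
        </javac>
    </target>
    <target name="jar" depends="compile">
        <!-- The jar name must match conf.set("mapreduce.job.jar", "myjob.jar") -->
        <jar destfile="myjob.jar" basedir="build/classes"/>
    </target>
</project>

With the jar in place, running the job from Eclipse printed the following on my console: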

Console output
Mode:  hp1:8021
Output path exists; deleted!
INFO - RMProxy.createRMProxy(56) | Connecting to ResourceManager at /192.168.46.28:8032
WARN - JobSubmitter.copyAndConfigureFiles(149) | Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
INFO - FileInputFormat.listStatus(287) | Total input paths to process : 1
INFO - JobSubmitter.submitJobInternal(394) | number of splits:1
INFO - Configuration.warnOnceIfDeprecated(840) | user.name is deprecated. Instead, use mapreduce.job.user.name
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.jar is deprecated. Instead, use mapreduce.job.jar
INFO - Configuration.warnOnceIfDeprecated(840) | fs.default.name is deprecated. Instead, use fs.defaultFS
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.job.name is deprecated. Instead, use mapreduce.job.name
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
INFO - JobSubmitter.printTokens(477) | Submitting tokens for job: job_1402492118962_0004
INFO - YarnClientImpl.submitApplication(174) | Submitted application application_1402492118962_0004 to ResourceManager at /192.168.46.28:8032
INFO - Job.submit(1272) | The url to track the job: http://hp1:8088/proxy/application_1402492118962_0004/
INFO - Job.monitorAndPrintJob(1317) | Running job: job_1402492118962_0004
INFO - Job.monitorAndPrintJob(1338) | Job job_1402492118962_0004 running in uber mode : false
INFO - Job.monitorAndPrintJob(1345) |  map 0% reduce 0%
INFO - Job.monitorAndPrintJob(1345) |  map 100% reduce 0%
INFO - Job.monitorAndPrintJob(1345) |  map 100% reduce 100%
INFO - Job.monitorAndPrintJob(1356) | Job job_1402492118962_0004 completed successfully
INFO - Job.monitorAndPrintJob(1363) | Counters: 43
    File System Counters
        FILE: Number of bytes read=58
        FILE: Number of bytes written=159667
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=147
        HDFS: Number of bytes written=27
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=6155
        Total time spent by all reduces in occupied slots (ms)=4929
    Map-Reduce Framework
        Map input records=4
        Map output records=4
        Map output bytes=44
        Map output materialized bytes=58
        Input split bytes=109
        Combine input records=0
        Combine output records=0
        Reduce input groups=3
        Reduce shuffle bytes=58
        Reduce input records=4
        Reduce output records=3
        Spilled Records=8
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=99
        CPU time spent (ms)=1060
        Physical memory (bytes) snapshot=309071872
        Virtual memory (bytes) snapshot=1680531456
        Total committed heap usage (bytes)=136450048
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=38
    File Output Format Counters
        Bytes Written=27

The job also shows up as expected in the ResourceManager web UI on port 8088, and the word count output is correct. With that, debugging Hadoop 2.2 against a distributed cluster from Eclipse is working.

