This example extracts the top ten values from 1,000,000 records. (I wrote this by hand after finishing the course, without referring to anything online; the result should be correct, though I never found a reference solution to check it against.)
First, the input values are plain numbers (parsed as long below), and Hadoop's default key ordering is ascending, while here we want descending order. So we define a custom Hadoop key type, implement the WritableComparable interface, and override its compareTo method.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

class MySuperKey implements WritableComparable<MySuperKey> {
    Long mykey;

    public MySuperKey() {
        // no-arg constructor required by Hadoop for deserialization
    }

    public MySuperKey(long mykey) {
        this.mykey = mykey;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.mykey = in.readLong();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(this.mykey);
    }

    @Override
    public int hashCode() {
        return this.mykey.hashCode();
    }

    @Override
    public boolean equals(Object obj) {
        // compare against MySuperKey, not LongWritable: an object of a
        // different class can never be equal to this key
        if (!(obj instanceof MySuperKey)) {
            return false;
        }
        MySuperKey other = (MySuperKey) obj;
        return this.mykey.equals(other.mykey);
    }

    @Override
    public int compareTo(MySuperKey o) {
        // compare o to this (arguments swapped) for descending order;
        // Long.compare avoids the overflow risk of plain subtraction
        return Long.compare(o.mykey, this.mykey);
    }
}
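A quick standalone check that comparing with the arguments swapped yields descending order (plain Java, no Hadoop dependencies; the values are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class DescendingOrderCheck {
    public static void main(String[] args) {
        List<Long> values = new ArrayList<>(List.of(5L, 42L, 7L, 100L));
        // Swapping the arguments to Long.compare reverses the natural order.
        values.sort((a, b) -> Long.compare(b, a));
        System.out.println(values); // [100, 42, 7, 5]
    }
}
```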
After the map phase, records with equal keys must be grouped together before reduce. Hadoop can do this for its built-in key types, but it cannot tell whether two instances of a custom key are the same, so we also implement the RawComparator interface, override its compare methods, and register the class when setting up the job:
job.setGroupingComparatorClass(MyGroupingComparator.class);
Here is the custom grouping comparator:
import org.apache.hadoop.io.RawComparator;
import org.apache.hadoop.io.WritableComparator;

class MyGroupingComparator implements RawComparator<MySuperKey> {
    @Override
    public int compare(MySuperKey o1, MySuperKey o2) {
        // ascending is fine for grouping; Long.compare avoids subtraction overflow
        return Long.compare(o1.mykey, o2.mykey);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        // a serialized MySuperKey is exactly one long, i.e. 8 bytes
        return WritableComparator.compareBytes(b1, s1, 8, b2, s2, 8);
    }
}
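WritableComparator.compareBytes performs an unsigned, lexicographic comparison of the raw bytes. Because writeLong serializes a long as 8 big-endian bytes, that byte order matches ascending numeric order for non-negative keys, which is why the grouping comparator can work on bytes without deserializing. A plain-Java sketch of the same idea, without the Hadoop dependency:

```java
import java.nio.ByteBuffer;

public class RawByteCompare {
    // Unsigned lexicographic byte comparison, as WritableComparator.compareBytes does.
    static int compareBytes(byte[] a, byte[] b) {
        for (int i = 0; i < 8; i++) {
            int x = a[i] & 0xFF, y = b[i] & 0xFF;
            if (x != y) return x < y ? -1 : 1;
        }
        return 0;
    }

    static byte[] toBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array(); // big-endian, like writeLong
    }

    public static void main(String[] args) {
        System.out.println(compareBytes(toBytes(3L), toBytes(200L)) < 0);   // true
        System.out.println(compareBytes(toBytes(200L), toBytes(200L)) == 0); // true
    }
}
```

Note the non-negative caveat: for negative longs the sign bit makes big-endian bytes sort after all positives, so this shortcut assumes the dataset contains non-negative values.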
Next, the Mapper and Reducer:
class mysuperMap extends Mapper<LongWritable, Text, MySuperKey, NullWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // each input line holds one number; wrap it in our custom key
        long number = Long.parseLong(value.toString());
        context.write(new MySuperKey(number), NullWritable.get());
    }
}
class mysupderreduace extends Reducer<MySuperKey, NullWritable, LongWritable, NullWritable> {
    int i = 0;

    @Override
    protected void reduce(MySuperKey key, Iterable<NullWritable> values, Context context)
            throws IOException, InterruptedException {
        // keys arrive in descending order, so the first ten groups are the ten largest values
        i = i + 1;
        if (i < 11) {
            context.write(new LongWritable(key.mykey), NullWritable.get());
        }
    }
}
Finally, the main method that configures and runs the job:
public static void main(String[] args) throws Exception {
    final String INPUT_PATHs = "hdfs://chaoren:9000/seq100w.txt";
    final String OUT_PATHs = "hdfs://chaoren:9000/out";
    Job job = new Job(new Configuration(), MySuper.class.getSimpleName());

    FileInputFormat.setInputPaths(job, INPUT_PATHs);
    job.setInputFormatClass(TextInputFormat.class);

    job.setMapperClass(mysuperMap.class);
    job.setMapOutputKeyClass(MySuperKey.class);
    job.setMapOutputValueClass(NullWritable.class);

    // 1.3 set the partitioner; one reduce task sees every key
    job.setPartitionerClass(HashPartitioner.class);
    job.setNumReduceTasks(1);

    // register the grouping comparator
    job.setGroupingComparatorClass(MyGroupingComparator.class);

    job.setReducerClass(mysupderreduace.class);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(NullWritable.class);

    FileOutputFormat.setOutputPath(job, new Path(OUT_PATHs));
    job.setOutputFormatClass(TextOutputFormat.class);

    job.waitForCompletion(true);
}
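As a sanity check outside the cluster, the same "top ten" result can be computed locally with a min-heap of size 10. This is a hypothetical standalone sketch, not part of the job; it exists only to verify what the reducer's output should look like:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TopTenLocal {
    // Keep the 10 largest values seen so far in a min-heap.
    static List<Long> topTen(Iterable<Long> values) {
        PriorityQueue<Long> heap = new PriorityQueue<>();
        for (long v : values) {
            heap.offer(v);
            if (heap.size() > 10) {
                heap.poll(); // evict the current minimum
            }
        }
        List<Long> result = new ArrayList<>(heap);
        result.sort(Collections.reverseOrder()); // descending, like the job output
        return result;
    }

    public static void main(String[] args) {
        List<Long> data = new ArrayList<>();
        for (long i = 1; i <= 100; i++) data.add(i);
        System.out.println(topTen(data));
        // [100, 99, 98, 97, 96, 95, 94, 93, 92, 91]
    }
}
```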