public class SleepJob
extends org.apache.hadoop.conf.Configured
implements org.apache.hadoop.util.Tool, Mapper<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable>, Reducer<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable>, Partitioner<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable>
Dummy class for testing the MapReduce framework. Sleeps for a defined period of time in the mapper and the reducer, and generates fake input for map/reduce jobs. Note that the generated number of input pairs is on the order of numMappers * mapSleepTime / 100, so the job uses some disk space.

| Modifier and Type | Class and Description |
| --- | --- |
| static class | SleepJob.EmptySplit |
| static class | SleepJob.SleepInputFormat |
| Constructor and Description |
| --- |
| SleepJob() |
| Modifier and Type | Method and Description |
| --- | --- |
| void | close() |
| void | configure(JobConf job) Initializes a new instance from a JobConf. |
| int | getPartition(org.apache.hadoop.io.IntWritable k, org.apache.hadoop.io.NullWritable v, int numPartitions) Get the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. |
| static void | main(java.lang.String[] args) |
| void | map(org.apache.hadoop.io.IntWritable key, org.apache.hadoop.io.IntWritable value, OutputCollector<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable> output, Reporter reporter) Maps a single input key/value pair into an intermediate key/value pair. |
| void | reduce(org.apache.hadoop.io.IntWritable key, java.util.Iterator<org.apache.hadoop.io.NullWritable> values, OutputCollector<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable> output, Reporter reporter) Reduces values for a given key. |
| int | run(int numMapper, int numReducer, long mapSleepTime, int mapSleepCount, long reduceSleepTime, int reduceSleepCount) |
| int | run(java.lang.String[] args) |
| JobConf | setupJobConf(int numMapper, int numReducer, long mapSleepTime, int mapSleepCount, long reduceSleepTime, int reduceSleepCount) |
public int getPartition(org.apache.hadoop.io.IntWritable k, org.apache.hadoop.io.NullWritable v, int numPartitions)

Description copied from interface: Partitioner
Get the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. Typically a hash function on all or a subset of the key.

Specified by:
getPartition in interface Partitioner<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable>

Parameters:
k - the key to be partitioned.
v - the entry value.
numPartitions - the total number of partitions.

Returns:
the partition number for the key.
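A sketch of a partitioner that satisfies this contract is shown below. It maps the IntWritable key onto [0, numPartitions) with a simple modulo, which fits a sleep-style job whose intermediate keys already encode the target reduce; it is an illustration, not necessarily SleepJob's exact implementation.

```java
// Illustrative partitioner: reduce the integer key modulo the number of
// partitions so every key lands in a valid partition [0, numPartitions).
public int getPartition(org.apache.hadoop.io.IntWritable k,
                        org.apache.hadoop.io.NullWritable v,
                        int numPartitions) {
  return k.get() % numPartitions;
}
```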
public void map(org.apache.hadoop.io.IntWritable key, org.apache.hadoop.io.IntWritable value, OutputCollector<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable> output, Reporter reporter) throws java.io.IOException
Description copied from interface: Mapper
Maps a single input key/value pair into an intermediate key/value pair.

Output pairs need not be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to OutputCollector.collect(Object,Object).

Applications can use the Reporter provided to report progress or just to indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. Another way to avoid this is to set mapred.task.timeout to a high enough value (or even zero for no time-outs).

Specified by:
map in interface Mapper<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable>

Parameters:
key - the input key.
value - the input value.
output - collects mapped keys and values.
reporter - facility to report progress.

Throws:
java.io.IOException
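To make the Reporter advice concrete, here is a minimal stand-alone mapper in the spirit of SleepJob: it sleeps per record and uses status updates as liveness signals. The class name and the fixed sleepMillis field are hypothetical; SleepJob's real bookkeeping (per-record durations derived from the job configuration) is richer.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SleepingMapper extends MapReduceBase
    implements Mapper<IntWritable, IntWritable, IntWritable, NullWritable> {

  private final long sleepMillis = 100;  // hypothetical fixed per-record sleep

  public void map(IntWritable key, IntWritable value,
                  OutputCollector<IntWritable, NullWritable> output,
                  Reporter reporter) throws IOException {
    try {
      // The status update doubles as a liveness signal, so the framework
      // does not conclude that this deliberately slow task has timed out.
      reporter.setStatus("Sleeping for " + sleepMillis + " ms");
      Thread.sleep(sleepMillis);
    } catch (InterruptedException e) {
      throw new IOException("Interrupted while sleeping", e);
    }
    output.collect(key, NullWritable.get());  // pass the key through
  }
}
```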
public void reduce(org.apache.hadoop.io.IntWritable key, java.util.Iterator<org.apache.hadoop.io.NullWritable> values, OutputCollector<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable> output, Reporter reporter) throws java.io.IOException
Description copied from interface: Reducer
Reduces values for a given key.

The framework calls this method for each <key, (list of values)> pair in the grouped inputs. Output values must be of the same type as input values. Input keys must not be altered. The framework will reuse the key and value objects that are passed into the reduce, so the application should clone any objects it wants to keep a copy of. In many cases, all values are combined into zero or one value.

Output pairs are collected with calls to OutputCollector.collect(Object,Object).

Applications can use the Reporter provided to report progress or just to indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. Another way to avoid this is to set mapred.task.timeout to a high enough value (or even zero for no time-outs).

Specified by:
reduce in interface Reducer<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable>

Parameters:
key - the key.
values - the list of values to reduce.
output - to collect keys and combined values.
reporter - facility to report progress.

Throws:
java.io.IOException
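The object-reuse caveat above is a classic pitfall, so here is a minimal sketch of the defensive copy it calls for. This is a generic illustration with hypothetical Text key/value types, not SleepJob's own reducer.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class CopyingReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {

  public void reduce(Text key, Iterator<Text> values,
                     OutputCollector<Text, Text> output,
                     Reporter reporter) throws IOException {
    List<Text> kept = new ArrayList<Text>();
    while (values.hasNext()) {
      // new Text(...) takes a defensive copy; storing values.next()
      // directly would leave every list element aliasing one reused object.
      kept.add(new Text(values.next()));
      reporter.progress();  // liveness signal inside a long loop
    }
    for (Text v : kept) {
      output.collect(key, v);  // emit the copied values
    }
  }
}
```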
public void configure(JobConf job)

Description copied from interface: JobConfigurable
Initializes a new instance from a JobConf.

Specified by:
configure in interface JobConfigurable

Parameters:
job - the configuration
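As a hedged sketch, a sleep job's configure() typically just copies tunables from the JobConf into fields, as below. The property names (sleep.job.map.sleep.time and friends) and field names are assumptions for illustration; the authoritative names are whatever setupJobConf writes into the configuration.

```java
// Hypothetical fields and property names, shown for illustration only.
private int mapSleepCount, reduceSleepCount;          // records per task
private long mapSleepDuration, reduceSleepDuration;   // ms slept per record

public void configure(JobConf job) {
  this.mapSleepCount = job.getInt("sleep.job.map.sleep.count", 1);
  this.reduceSleepCount = job.getInt("sleep.job.reduce.sleep.count", 1);
  // Spread each task's total sleep time evenly over its records.
  this.mapSleepDuration =
      job.getLong("sleep.job.map.sleep.time", 100) / mapSleepCount;
  this.reduceSleepDuration =
      job.getLong("sleep.job.reduce.sleep.time", 100) / reduceSleepCount;
}
```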
public void close() throws java.io.IOException
Specified by:
close in interface java.io.Closeable
Specified by:
close in interface java.lang.AutoCloseable

Throws:
java.io.IOException
public static void main(java.lang.String[] args) throws java.lang.Exception

Throws:
java.lang.Exception
public int run(int numMapper, int numReducer, long mapSleepTime, int mapSleepCount, long reduceSleepTime, int reduceSleepCount) throws java.io.IOException

Throws:
java.io.IOException
public JobConf setupJobConf(int numMapper, int numReducer, long mapSleepTime, int mapSleepCount, long reduceSleepTime, int reduceSleepCount)
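setupJobConf also makes the job usable programmatically, without the command-line front end. A sketch under the assumption that the returned JobConf is ready to submit as-is (JobClient.runJob is the classic blocking submit of the old mapred API):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ProgrammaticSleep {
  public static void main(String[] args) throws Exception {
    SleepJob sleep = new SleepJob();
    sleep.setConf(new Configuration());  // Configured needs a Configuration first
    // 4 mappers sleeping 1000 ms total (1 wake-up each),
    // 2 reducers sleeping 500 ms total (1 wake-up each).
    JobConf conf = sleep.setupJobConf(4, 2, 1000L, 1, 500L, 1);
    JobClient.runJob(conf);  // submit and block until the job completes
  }
}
```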
public int run(java.lang.String[] args) throws java.lang.Exception

Specified by:
run in interface org.apache.hadoop.util.Tool

Throws:
java.lang.Exception
Copyright © 2009 The Apache Software Foundation