Package | Description |
---|---|
org.apache.hadoop.contrib.index.example | |
org.apache.hadoop.contrib.index.mapred | |
org.apache.hadoop.contrib.utils.join | |
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.dancing | A distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
org.apache.hadoop.examples.terasort | Three map/reduce applications for Hadoop that compete in the annual terabyte sort competition. |
org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
org.apache.hadoop.mapred.lib.db | Input and output formats for reading records from and writing records to SQL databases. |
org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |

Modifier and Type | Method and Description |
---|---|
RecordReader<DocumentID,LineDocTextAndOp> | LineDocInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
void | IdentityLocalAnalysis.map(DocumentID key, DocumentAndOp value, OutputCollector<DocumentID,DocumentAndOp> output, Reporter reporter) |
void | LineDocLocalAnalysis.map(DocumentID key, LineDocTextAndOp value, OutputCollector<DocumentID,DocumentAndOp> output, Reporter reporter) |

Modifier and Type | Method and Description |
---|---|
void | IndexUpdateMapper.map(K key, V value, OutputCollector<Shard,IntermediateForm> output, Reporter reporter) Map a key-value pair to a shard-and-intermediate form pair. |
void | IndexUpdateCombiner.reduce(Shard key, java.util.Iterator<IntermediateForm> values, OutputCollector<Shard,IntermediateForm> output, Reporter reporter) |
void | IndexUpdateReducer.reduce(Shard key, java.util.Iterator<IntermediateForm> values, OutputCollector<Shard,org.apache.hadoop.io.Text> output, Reporter reporter) |

Modifier and Type | Field and Description |
---|---|
protected Reporter | DataJoinMapperBase.reporter |
protected Reporter | DataJoinReducerBase.reporter |

Modifier and Type | Method and Description |
---|---|
protected void | DataJoinReducerBase.collect(java.lang.Object key, TaggedMapOutput aRecord, OutputCollector output, Reporter reporter) A subclass can override this method to perform additional filtering and/or other processing logic before a value is collected. |
void | DataJoinMapperBase.map(java.lang.Object key, java.lang.Object value, OutputCollector output, Reporter reporter) |
void | DataJoinReducerBase.map(java.lang.Object arg0, java.lang.Object arg1, OutputCollector arg2, Reporter arg3) |
void | DataJoinMapperBase.reduce(java.lang.Object arg0, java.util.Iterator arg1, OutputCollector arg2, Reporter arg3) |
void | DataJoinReducerBase.reduce(java.lang.Object key, java.util.Iterator values, OutputCollector output, Reporter reporter) |

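The datajoin base classes above are meant to be subclassed: a concrete mapper supplies the tagging and group-key logic, and the framework drives map and reduce. Below is a minimal sketch, assuming comma-separated Text records whose first field is the join key; the class names TaggedTextOutput and SampleJoinMapper are illustrative, not part of the library.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.contrib.utils.join.DataJoinMapperBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class SampleJoinMapper extends DataJoinMapperBase {

    // Concrete TaggedMapOutput wrapping one Text record; the framework
    // serializes the tag together with the data between map and reduce.
    public static class TaggedTextOutput extends TaggedMapOutput {
        private Text data = new Text();

        public TaggedTextOutput() { }
        public TaggedTextOutput(Text data) { this.data = data; }

        public Writable getData() { return data; }

        public void write(DataOutput out) throws IOException {
            this.tag.write(out);
            this.data.write(out);
        }

        public void readFields(DataInput in) throws IOException {
            this.tag.readFields(in);
            this.data.readFields(in);
        }
    }

    // Tag records by source file so the reducer can tell the inputs apart.
    protected Text generateInputTag(String inputFile) {
        return new Text(inputFile);
    }

    // Join on the first comma-separated field of each record.
    protected Text generateGroupKey(TaggedMapOutput aRecord) {
        String line = aRecord.getData().toString();
        return new Text(line.split(",", 2)[0]);
    }

    protected TaggedMapOutput generateTaggedMapOutput(Object value) {
        TaggedTextOutput ret = new TaggedTextOutput(new Text(value.toString()));
        ret.setTag(this.inputTag);
        return ret;
    }
}
```

The matching DataJoinReducerBase subclass typically overrides combine(...) to merge the tagged values for a group key, and may override collect(...) for the extra filtering noted in the table above.
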
Modifier and Type | Method and Description |
---|---|
RecordReader<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable> | SleepJob.SleepInputFormat.getRecordReader(InputSplit ignored, JobConf conf, Reporter reporter) |
void | SleepJob.map(org.apache.hadoop.io.IntWritable key, org.apache.hadoop.io.IntWritable value, OutputCollector<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.NullWritable> output, Reporter reporter) |
void | PiEstimator.PiMapper.map(org.apache.hadoop.io.LongWritable offset, org.apache.hadoop.io.LongWritable size, OutputCollector<org.apache.hadoop.io.BooleanWritable,org.apache.hadoop.io.LongWritable> out, Reporter reporter) Map method. |
void | PiEstimator.PiReducer.reduce(org.apache.hadoop.io.BooleanWritable isInside, java.util.Iterator<org.apache.hadoop.io.LongWritable> values, OutputCollector<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.io.Writable> output, Reporter reporter) Accumulates the numbers of points inside/outside from the results of the mappers (see the estimate below this table). |
void | SleepJob.reduce(org.apache.hadoop.io.IntWritable key, java.util.Iterator<org.apache.hadoop.io.NullWritable> values, OutputCollector<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.NullWritable> output, Reporter reporter) |

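The PiEstimator pair above follows the usual Monte Carlo construction: the mappers classify sample points as falling inside or outside the circle inscribed in the unit square, and the reducer's accumulated counts yield the estimate

$$\pi \approx 4 \cdot \frac{n_{\text{inside}}}{n_{\text{inside}} + n_{\text{outside}}}$$

(the exact sampling scheme is internal to PiEstimator; only the counting contract is visible in the table above).
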
Modifier and Type | Method and Description |
---|---|
void | DistributedPentomino.PentMap.map(org.apache.hadoop.io.WritableComparable key, org.apache.hadoop.io.Text value, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order). |

Modifier and Type | Method and Description |
---|---|
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | TeraInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
void | TeraGen.SortGenMapper.map(org.apache.hadoop.io.LongWritable row, org.apache.hadoop.io.NullWritable ignored, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) |

Modifier and Type | Class and Description |
---|---|
class | Task.TaskReporter |

Modifier and Type | Field and Description |
---|---|
static Reporter | Reporter.NULL A constant of Reporter type that does nothing. |

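Reporter.NULL is the standard way to satisfy a Reporter parameter when no live task is attached, for example when exercising a mapper directly in a test harness. A minimal sketch using TokenCountMapper from org.apache.hadoop.mapred.lib (the in-memory collector is our own scaffolding, not library code):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.TokenCountMapper;

public class ReporterNullDemo {
    public static void main(String[] args) throws IOException {
        // Home-made collector that buffers output pairs in memory
        // instead of handing them to the MapReduce framework.
        final List<String> out = new ArrayList<String>();
        OutputCollector<Text, LongWritable> collector =
            new OutputCollector<Text, LongWritable>() {
                public void collect(Text key, LongWritable value) {
                    out.add(key + "\t" + value);
                }
            };

        // Reporter.NULL stands in for the framework-supplied Reporter:
        // every progress/status/counter call on it is a no-op.
        new TokenCountMapper<LongWritable>().map(
            new LongWritable(0L), new Text("to be or not to be"),
            collector, Reporter.NULL);

        System.out.println(out);  // each token paired with a count of 1
    }
}
```
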
Modifier and Type | Method and Description |
---|---|
void | RecordWriter.close(Reporter reporter) Close this RecordWriter to future operations. |
void | TextOutputFormat.LineRecordWriter.close(Reporter reporter) |
RecordReader<K,V> | InputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Get the RecordReader for the given InputSplit. |
abstract RecordReader<K,V> | FileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
abstract RecordReader<K,V> | MultiFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Deprecated. |
RecordReader<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> | SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<K,V> | SequenceFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | SequenceFileAsTextInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<K,V> | SequenceFileInputFilter.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Create a record reader for the given split. |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
void | Task.initialize(JobConf job, JobID id, Reporter reporter, boolean useNewApi) |
void | Mapper.map(K1 key, V1 value, OutputCollector<K2,V2> output, Reporter reporter) Maps a single input key/value pair into an intermediate key/value pair (see the sketch after this table). |
void | Reducer.reduce(K2 key, java.util.Iterator<V2> values, OutputCollector<K3,V3> output, Reporter reporter) Reduces values for a given key. |
void | MapRunnable.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) Start mapping input <key, value> pairs. |
void | MapRunner.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) |

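To make the Mapper.map and Reducer.reduce contracts above concrete, here is a minimal word-count sketch in the old org.apache.hadoop.mapred API (the class names are illustrative); note how the Reporter parameter keeps the framework informed:

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Maps a single input key/value pair into intermediate pairs,
// exactly the Mapper.map contract listed above.
public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output,
                    Reporter reporter) throws IOException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            output.collect(new Text(tokens.nextToken()), ONE);
            // The Reporter keeps long-running tasks alive and feeds counters.
            reporter.incrCounter("demo", "tokens", 1);
        }
    }
}

// Reduces all values for a given key, per the Reducer.reduce contract.
class WordCountReducer extends MapReduceBase
        implements Reducer<Text, LongWritable, Text, LongWritable> {
    public void reduce(Text key, Iterator<LongWritable> values,
                       OutputCollector<Text, LongWritable> output,
                       Reporter reporter) throws IOException {
        long sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new LongWritable(sum));
    }
}
```
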
Constructor and Description |
---|
ReduceTask.ReduceCopier.MapOutputCopier(JobConf job, Reporter reporter, javax.crypto.SecretKey jobTokenSecret) |
Task.CombineValuesIterator(RawKeyValueIterator in, org.apache.hadoop.io.RawComparator<KEY> comparator, java.lang.Class<KEY> keyClass, java.lang.Class<VALUE> valClass, org.apache.hadoop.conf.Configuration conf, Reporter reporter, Counters.Counter combineInputCounter) |

Modifier and Type | Method and Description |
---|---|
ComposableRecordReader<K,V> | ComposableInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
ComposableRecordReader<K,TupleWritable> | CompositeInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |

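As the join package summary earlier says, sorted datasets with the same key class and equal partitions can be joined before the map. A sketch of wiring that up with CompositeInputFormat; the paths and the choice of SequenceFileInputFormat are placeholder assumptions:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class JoinJobSetup {
    public static void configure(JobConf job) {
        job.setInputFormat(CompositeInputFormat.class);
        // Build the "init expression" mentioned in getRecordReader's
        // description: an inner join over two sorted, identically
        // partitioned SequenceFile datasets.
        job.set("mapred.join.expr", CompositeInputFormat.compose(
            "inner", SequenceFileInputFormat.class,
            new Path("/data/left"), new Path("/data/right")));
    }
}
```

Each map call then receives the shared key and a TupleWritable holding one value per joined source.
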
Modifier and Type | Field and Description |
---|---|
protected Reporter | CombineFileRecordReader.reporter |

Modifier and Type | Method and Description |
---|---|
OutputCollector | MultipleOutputs.getCollector(java.lang.String namedOutput, Reporter reporter) Gets the output collector for a named output (see the sketch after this table). |
OutputCollector | MultipleOutputs.getCollector(java.lang.String namedOutput, java.lang.String multiName, Reporter reporter) Gets the output collector for a multi named output. |
abstract RecordReader<K,V> | CombineFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) This is not implemented yet. |
RecordReader<K,V> | DelegatingInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter) |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | NLineInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
void | DelegatingMapper.map(K1 key, V1 value, OutputCollector<K2,V2> outputCollector, Reporter reporter) |
void | RegexMapper.map(K key, org.apache.hadoop.io.Text value, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.LongWritable> output, Reporter reporter) |
void | TokenCountMapper.map(K key, org.apache.hadoop.io.Text value, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.LongWritable> output, Reporter reporter) |
void | IdentityMapper.map(K key, V val, OutputCollector<K,V> output, Reporter reporter) The identity function. |
void | FieldSelectionMapReduce.map(K key, V val, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) The identity function. |
void | InverseMapper.map(K key, V value, OutputCollector<V,K> output, Reporter reporter) The inverse function. |
void | ChainMapper.map(java.lang.Object key, java.lang.Object value, OutputCollector output, Reporter reporter) Chains the map(...) methods of the Mappers in the chain. |
void | LongSumReducer.reduce(K key, java.util.Iterator<org.apache.hadoop.io.LongWritable> values, OutputCollector<K,org.apache.hadoop.io.LongWritable> output, Reporter reporter) |
void | IdentityReducer.reduce(K key, java.util.Iterator<V> values, OutputCollector<K,V> output, Reporter reporter) Writes all keys and values directly to output. |
void | ChainReducer.reduce(java.lang.Object key, java.util.Iterator values, OutputCollector output, Reporter reporter) Chains the reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain. |
void | FieldSelectionMapReduce.reduce(org.apache.hadoop.io.Text key, java.util.Iterator<org.apache.hadoop.io.Text> values, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) |
void | MultithreadedMapRunner.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) |

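A sketch of the MultipleOutputs.getCollector flow referenced in the table above. It assumes the job driver declared a named output called "summary" via MultipleOutputs.addNamedOutput(...); the reducer logic itself is illustrative:

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class SideOutputReducer extends MapReduceBase
        implements Reducer<Text, LongWritable, Text, LongWritable> {
    private MultipleOutputs mos;

    public void configure(JobConf job) {
        // Assumes the driver declared the named output, e.g.:
        // MultipleOutputs.addNamedOutput(job, "summary",
        //     TextOutputFormat.class, Text.class, LongWritable.class);
        mos = new MultipleOutputs(job);
    }

    @SuppressWarnings("unchecked")
    public void reduce(Text key, Iterator<LongWritable> values,
                       OutputCollector<Text, LongWritable> output,
                       Reporter reporter) throws IOException {
        long sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new LongWritable(sum));
        // Route a copy to the named output; the Reporter parameter lets
        // MultipleOutputs report progress while writing.
        mos.getCollector("summary", reporter).collect(key, new LongWritable(sum));
    }

    public void close() throws IOException {
        mos.close();  // flush and close all side outputs
    }
}
```
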
Constructor and Description |
---|
CombineFileRecordReader(JobConf job, CombineFileSplit split, Reporter reporter, java.lang.Class<RecordReader<K,V>> rrClass) A generic RecordReader that can hand out different recordReaders for each chunk in the CombineFileSplit. |

Modifier and Type | Method and Description |
---|---|
void | ValueAggregatorCombiner.map(K1 arg0, V1 arg1, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> arg2, Reporter arg3) Do nothing. |
void | ValueAggregatorMapper.map(K1 key, V1 value, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) The map function. |
void | ValueAggregatorReducer.map(K1 arg0, V1 arg1, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> arg2, Reporter arg3) Do nothing. |
void | ValueAggregatorCombiner.reduce(org.apache.hadoop.io.Text key, java.util.Iterator<org.apache.hadoop.io.Text> values, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) Combines values for a given key. |
void | ValueAggregatorMapper.reduce(org.apache.hadoop.io.Text arg0, java.util.Iterator<org.apache.hadoop.io.Text> arg1, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> arg2, Reporter arg3) Do nothing. |
void | ValueAggregatorReducer.reduce(org.apache.hadoop.io.Text key, java.util.Iterator<org.apache.hadoop.io.Text> values, OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, Reporter reporter) |

Modifier and Type | Method and Description |
---|---|
void | DBOutputFormat.DBRecordWriter.close(Reporter reporter) Close this RecordWriter to future operations. |
RecordReader<org.apache.hadoop.io.LongWritable,T> | DBInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Get the RecordReader for the given InputSplit. |

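A sketch of the job setup that leads to DBInputFormat.getRecordReader being called, under the assumption of a JDBC-accessible table named employees; the driver class, URL, table, column names, and the MyRecord value class are all placeholders:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DbJobSetup {

    // Value class materialized from each row; it must be both Writable
    // (for the framework) and DBWritable (for JDBC I/O).
    public static class MyRecord implements Writable, DBWritable {
        long id;
        String name = "";

        public void readFields(ResultSet rs) throws SQLException {
            id = rs.getLong(1);
            name = rs.getString(2);
        }
        public void write(PreparedStatement ps) throws SQLException {
            ps.setLong(1, id);
            ps.setString(2, name);
        }
        public void readFields(DataInput in) throws IOException {
            id = in.readLong();
            name = Text.readString(in);
        }
        public void write(DataOutput out) throws IOException {
            out.writeLong(id);
            Text.writeString(out, name);
        }
    }

    public static void configure(JobConf job) {
        job.setInputFormat(DBInputFormat.class);
        DBConfiguration.configureDB(job,
            "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/mydb");
        // Keys arrive as LongWritable row ids, values as MyRecord instances.
        DBInputFormat.setInput(job, MyRecord.class,
            "employees", null /* conditions */, "id" /* orderBy */,
            "id", "name");
    }
}
```
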
Modifier and Type | Method and Description |
---|---|
RecordReader | AutoInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | StreamInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
void | PipeMapper.map(java.lang.Object key, java.lang.Object value, OutputCollector output, Reporter reporter) |
void | PipeReducer.reduce(java.lang.Object key, java.util.Iterator values, OutputCollector output, Reporter reporter) |
void | PipeMapRunner.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) |

Constructor and Description |
---|
StreamBaseRecordReader(org.apache.hadoop.fs.FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, org.apache.hadoop.fs.FileSystem fs) |
StreamXmlRecordReader(org.apache.hadoop.fs.FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, org.apache.hadoop.fs.FileSystem fs) |