- CACHE_ARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_ARCHIVES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_ARCHIVES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_ARCHIVES_VISIBILITIES - Static variable in interface org.apache.hadoop.mapreduce.JobContext
-
- CACHE_FILE_VISIBILITIES - Static variable in interface org.apache.hadoop.mapreduce.JobContext
-
- CACHE_FILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_FILES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_FILES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_LOCALARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_LOCALFILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- CACHE_SYMLINK - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
- cacheArchives - Variable in class org.apache.hadoop.streaming.StreamJob
-
- cacheFiles - Variable in class org.apache.hadoop.streaming.StreamJob
-
- cancel(Token<?>, Configuration) - Method in class org.apache.hadoop.mapred.JobClient.Renewer
-
- cancelAllReservations() - Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
-
Cleanup when the TaskTracker is declared as 'lost/blacklisted' by the JobTracker.
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapred.JobClient
-
Cancel a delegation token from the JobTracker
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapred.JobTracker
-
Discard a current delegation token.
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- canCommit(TaskAttemptID, JvmContext) - Method in class org.apache.hadoop.mapred.TaskTracker
-
Child checking whether it can commit
- canCommit(TaskAttemptID, JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Polling to know whether the task can go ahead with its commit.
- captureDebugOut(List<String>, File) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Wrap a command in a shell to capture debug script's
stdout and stderr to debugout.
- captureOutAndError(List<String>, File, File, long) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Wrap a command in a shell to capture stdout and stderr to files.
- captureOutAndError(List<String>, List<String>, File, File, long) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Wrap a command in a shell to capture stdout and stderr to files.
- captureOutAndError(List<String>, List<String>, File, File, long, String) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Deprecated.
pidFiles are no longer used; instead, the pid is exported to the environment variable JVM_PID.
- captureOutAndError(List<String>, List<String>, File, File, long, boolean, String) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Deprecated.
pidFiles are no longer used; instead, the pid is exported to the environment variable JVM_PID.
- captureOutAndError(List<String>, List<String>, File, File, long, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Wrap a command in a shell to capture stdout and stderr to files.
- ChainMapper - Class in org.apache.hadoop.mapred.lib
-
The ChainMapper class allows the use of multiple Mapper classes within a single Map task.
- ChainMapper() - Constructor for class org.apache.hadoop.mapred.lib.ChainMapper
-
Constructor.
- ChainReducer - Class in org.apache.hadoop.mapred.lib
-
The ChainReducer class allows chaining multiple Mapper classes after a Reducer within the Reducer task.
- ChainReducer() - Constructor for class org.apache.hadoop.mapred.lib.ChainReducer
-
Constructor.
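The ChainMapper/ChainReducer entries above describe a pipeline where the output of one Mapper becomes the input of the next within a single task. A dependency-free sketch of that chaining idea (the Function-based pipeline is purely illustrative and is not the Hadoop ChainMapper API, which is configured via JobConf):

```java
import java.util.function.Function;

public class ChainSketch {
    // Chain two "mappers": the output of the first feeds the second,
    // analogous to ChainMapper running several Mappers inside one map task.
    static String runChain(String line) {
        Function<String, String> lowercaseMapper = s -> s.trim().toLowerCase();
        Function<String, String> stripMapper = s -> s.replaceAll("[^a-z ]", "");
        return lowercaseMapper.andThen(stripMapper).apply(line);
    }

    public static void main(String[] args) {
        // "  Hello, World! " -> "hello, world!" -> "hello world"
        System.out.println(runChain("  Hello, World! "));
    }
}
```

Because the chain composes record-level transformations, no intermediate data needs to hit disk between stages, which is the main benefit the real ChainMapper provides.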
- checkAndInformJobTracker(int, TaskAttemptID, boolean) - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- checkArgs(String) - Method in class org.apache.hadoop.mapred.SshFenceByTcpPort
-
Verify that the argument, if given in the conf, is parseable.
- checkCounters(int) - Method in class org.apache.hadoop.mapreduce.counters.Limits
-
- checkException(IOException, String, String, TaskTracker.ShuffleServerMetrics) - Method in class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
-
- checkExistence(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Checks whether a specific shell command is available
in the system.
- checkFencingConfigured() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- checkForRotation() - Method in class org.apache.hadoop.contrib.failmon.LogParser
-
Check whether the log file has been rotated.
- checkGroups(int) - Method in class org.apache.hadoop.mapreduce.counters.Limits
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Check for validity of the output-specification for the job.
- checkRpcAdminAccess() - Method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- checkURIs(URI[], URI[]) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
This method checks whether there is a conflict among the fragment names of the URIs.
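checkURIs guards against cache URIs whose fragment names collide (the fragment becomes the symlink name in the task's working directory). A hypothetical pure-Java sketch of such a check, using only java.net.URI; the method name and exact semantics here are illustrative, not the Hadoop implementation:

```java
import java.net.URI;
import java.util.HashSet;
import java.util.Set;

public class FragmentCheck {
    // Returns true if any two URIs share the same fragment (case-insensitive),
    // which would make their cache symlink names collide.
    static boolean hasFragmentConflict(URI[] uris) {
        Set<String> seen = new HashSet<>();
        for (URI uri : uris) {
            String frag = uri.getFragment();
            if (frag != null && !seen.add(frag.toLowerCase())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        URI[] uris = {
            URI.create("hdfs://nn/cache/a.jar#lib"),
            URI.create("hdfs://nn/cache/b.jar#lib")  // same fragment: conflict
        };
        System.out.println(hasFragmentConflict(uris));
    }
}
```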
- chooseShardForDelete(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
-
- chooseShardForDelete(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
-
- chooseShardForDelete(DocumentID) - Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
-
Choose a shard or all shards to send a delete request.
- chooseShardForInsert(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
-
- chooseShardForInsert(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
-
- chooseShardForInsert(DocumentID) - Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
-
Choose a shard to send an insert request.
- clean(long) - Method in class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
-
- cleanup() - Method in class org.apache.hadoop.contrib.failmon.Executor
-
- cleanup() - Method in class org.apache.hadoop.contrib.failmon.RunOnce
-
- cleanup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
-
Called once at the end of the task.
- cleanup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
-
Called once at the end of the task.
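The cleanup(Context) hooks above are invoked once at the end of the task by the framework's run loop: setup once, map (or reduce) per record, cleanup once. A dependency-free sketch of that lifecycle shape (the static methods stand in for Hadoop's Mapper.Context-driven calls and are not the real API):

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {
    static List<String> log = new ArrayList<>();

    static void setup()            { log.add("setup"); }
    static void map(String record) { log.add("map:" + record); }
    static void cleanup()          { log.add("cleanup"); }

    // Mirrors the shape of Mapper.run(): setup, one map() call per
    // input record, then cleanup exactly once at the end of the task.
    static void run(List<String> records) {
        setup();
        for (String r : records) {
            map(r);
        }
        cleanup();
    }

    public static void main(String[] args) {
        run(List.of("k1", "k2"));
        System.out.println(log);
    }
}
```

This is why per-task resources (open files, connections) are conventionally released in cleanup rather than in map itself.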
- cleanupAllVolumes() - Method in class org.apache.hadoop.util.MRAsyncDiskService
-
Move all files/directories inside volume into TOBEDELETED, and then
delete them.
- cleanupDirsInAllVolumes(String[]) - Method in class org.apache.hadoop.util.MRAsyncDiskService
-
Move specified directories/files in each volume into TOBEDELETED, and then
delete them.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
- cleanUpMetrics() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Called when the job is complete
- cleanupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
-
- cleanupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- CleanupQueue - Class in org.apache.hadoop.mapred
-
- CleanupQueue() - Constructor for class org.apache.hadoop.mapred.CleanupQueue
-
Create a singleton path-clean-up queue.
- CleanupQueue.PathDeletionContext - Class in org.apache.hadoop.mapred
-
Contains info related to the path of the file/dir to be deleted
- CleanupQueue.PathDeletionContext(Path, Configuration) - Constructor for class org.apache.hadoop.mapred.CleanupQueue.PathDeletionContext
-
- CleanupQueue.PathDeletionContext(Path, Configuration, UserGroupInformation, JobID, FileSystem) - Constructor for class org.apache.hadoop.mapred.CleanupQueue.PathDeletionContext
-
PathDeletionContext ctor which also allows for a job-delegation token
renewal to be cancelled.
- cleanupStorage() - Method in class org.apache.hadoop.mapred.TaskTracker
-
Deprecated.
- cleanupThread - Variable in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
- cleanUpTokenReferral(Configuration) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Remove jobtoken referrals which don't make sense in the context
of the task execution.
- clear() - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- clear() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- clear() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- clear() - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
-
Close datasources, but do not release internal resources.
- clear() - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- clear() - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- clearOldUserLogs(Configuration) - Method in class org.apache.hadoop.mapred.UserLogCleaner
-
Clears all the logs in the userlogs directory.
- clearOldUserLogs(Configuration) - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogManager
-
Called during TaskTracker restart/re-init.
- ClientTraceLog - Static variable in class org.apache.hadoop.mapred.TaskTracker
-
- clone(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- clone() - Method in class org.apache.hadoop.mapred.JobStatus
-
- clone() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- cloneContext(JobContext, Configuration) - Static method in class org.apache.hadoop.mapreduce.ContextFactory
-
- cloneDelegationTokenForLogicalAddress(UserGroupInformation, String, Collection<InetSocketAddress>) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- cloneMapContext(MapContext<K1, V1, K2, V2>, Configuration, RecordReader<K1, V1>, RecordWriter<K2, V2>) - Static method in class org.apache.hadoop.mapreduce.ContextFactory
-
Copy a custom WrappedMapper.Context, optionally replacing
the input and output.
- close() - Method in class org.apache.hadoop.contrib.failmon.LocalStore
-
Close the temporary local file
- close() - Method in class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
-
- close() - Method in class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
-
- close() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- close() - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- close() - Method in class org.apache.hadoop.contrib.index.lucene.ShardWriter
-
Close the shard writer.
- close() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
-
- close() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
-
- close() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
- close() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
-
- close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- close() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
-
- close() - Method in class org.apache.hadoop.examples.MultiFileWordCount.CombineFileLineRecordReader
-
- close() - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
-
Reduce task done, write output to a file.
- close() - Method in class org.apache.hadoop.examples.SleepJob
-
- close() - Method in class org.apache.hadoop.mapred.ConfiguredFailoverProxyProvider
-
Close all the proxy objects which have been opened over the lifetime of
this proxy provider.
- close() - Method in class org.apache.hadoop.mapred.JobClient
-
Close the JobClient.
- close() - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- close() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- close() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- close() - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
-
Close datasources and release resources.
- close() - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- close() - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- close() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
Do nothing.
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- close() - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Closes the ChainMapper and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Closes the ChainReducer, the Reducer and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- close(Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat.DBRecordWriter
-
Close this RecordWriter
to future operations.
- close() - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
-
- close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- close() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Closes all the opened named outputs.
- close() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- close() - Method in interface org.apache.hadoop.mapred.MapOutputCollector
-
- close() - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- close() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
- close() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- close() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Closes the iterator so that the underlying streams can be closed.
- close() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
-
Close this RecordWriter
to future operations.
- close() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- close() - Method in interface org.apache.hadoop.mapred.ShuffleConsumerPlugin
-
Close and clean up any resources associated with this object.
- close() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- close() - Method in class org.apache.hadoop.mapred.TaskTracker
-
Close down the TaskTracker and all its components.
- close(Reporter) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
-
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
Close this RecordWriter
to future operations.
- close() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Close the record reader.
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat.FilterRecordWriter
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Closes all the opened outputs.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
-
- close() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Close the record reader.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordWriter
-
Close this RecordWriter
to future operations.
- close() - Static method in class org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal
-
Remove all token renewals.
- close() - Method in class org.apache.hadoop.streaming.PipeMapper
-
- close() - Method in class org.apache.hadoop.streaming.PipeReducer
-
- close() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
Close this to future operations.
- closeConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- closeMerger() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- closeWriter() - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
Close the Lucene index writer associated with the intermediate form,
if created.
- Cluster - Class in org.apache.hadoop.mapreduce
-
Provides a way to access information about the map/reduce cluster.
- Cluster(Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- Cluster(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- Cluster.JobTrackerStatus - Enum in org.apache.hadoop.mapreduce
-
- ClusterMetrics - Class in org.apache.hadoop.mapreduce
-
Status information on the current state of the Map-Reduce cluster.
- ClusterMetrics() - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterStatus - Class in org.apache.hadoop.mapred
-
Status information on the current state of the Map-Reduce cluster.
- cmpcl - Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- collate(Object[], String) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- collate(List, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- collect(Object, TaggedMapOutput, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected.
- collect(K, V, int) - Method in interface org.apache.hadoop.mapred.MapOutputCollector
-
- collect(K, V, int) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
- collect(K, V) - Method in interface org.apache.hadoop.mapred.OutputCollector
-
Adds a key/value pair to the output.
- collect(K, V) - Method in class org.apache.hadoop.mapred.Task.CombineOutputCollector
-
- collected - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- combine(Object[], Object[]) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.OuterJoinRecordReader
-
Emit everything from the collector.
- combine(RawKeyValueIterator, OutputCollector<K, V>) - Method in class org.apache.hadoop.mapred.Task.CombinerRunner
-
Run the combiner over a set of inputs.
- combine(RawKeyValueIterator, OutputCollector<K, V>) - Method in class org.apache.hadoop.mapred.Task.NewCombinerRunner
-
- combine(RawKeyValueIterator, OutputCollector<K, V>) - Method in class org.apache.hadoop.mapred.Task.OldCombinerRunner
-
- COMBINE_CLASS_ATTR - Static variable in interface org.apache.hadoop.mapreduce.JobContext
-
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Default constructor.
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Default constructor.
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapred.lib
-
A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit.
- CombineFileRecordReader(JobConf, CombineFileSplit, Reporter, Class<RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit.
- CombineFileRecordReader(CombineFileSplit, TaskAttemptContext, Class<? extends RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileSplit - Class in org.apache.hadoop.mapred.lib
-
A sub-collection of input files.
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Default constructor.
- CombineFileSplit(JobConf, Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Copy constructor
- CombineFileSplit - Class in org.apache.hadoop.mapreduce.lib.input
-
A sub-collection of input files.
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Default constructor.
- CombineFileSplit(Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(Path[], long[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Copy constructor
- comCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- COMMAND_FILE - Static variable in class org.apache.hadoop.mapred.TaskController
-
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For committing job's output after successful job completion.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Delete the temporary directory, including all of the work directories.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For cleaning up the job's output after job completion.
- commitPending(TaskAttemptID, TaskStatus, JvmContext) - Method in class org.apache.hadoop.mapred.TaskTracker
-
Task is reporting that it is in the commit_pending state and is waiting for the commit response.
- commitPending(TaskAttemptID, TaskStatus, JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Report that the task is complete, but its commit is pending.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
Promote the task's temporary output to the final output location: the task's output is moved to the job's output directory.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Move the files from the work directory to the job output directory
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
Promote the task's temporary output to the final output location: the task's output is moved to the job's output directory.
- committer - Variable in class org.apache.hadoop.mapred.Task
-
- COMPARATOR_OPTIONS - Static variable in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.examples.SecondarySort.FirstGroupingComparator
-
- compare(SecondarySort.IntPair, SecondarySort.IntPair) - Method in class org.apache.hadoop.examples.SecondarySort.FirstGroupingComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair.Comparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
-
- compare(int, int) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
Compare a logical range, i.e. indices i and j taken modulo the offset capacity.
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compareTo(Object) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
-
- compareTo(Object) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- compareTo(Shard) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
Compare to another shard.
- compareTo(Object) - Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
-
- compareTo(SecondarySort.IntPair) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
-
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.ID
-
Compare IDs by associated numbers
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.JobID
-
Compare JobIds by first jtIdentifiers, then by job numbers
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Compare TaskIds by first tipIds, then by task numbers.
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskID
-
Compare TaskInProgressIds by first jobIds, then by tip numbers.
- completed(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Indicate that one of the taskids in this TaskInProgress
has successfully completed!
- completedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
-
- completedTask(TaskInProgress, TaskStatus) - Method in class org.apache.hadoop.mapred.JobInProgress
-
A taskid assigned to this JobInProgress has reported in successfully.
- ComposableInputFormat<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable> - Interface in org.apache.hadoop.mapred.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableRecordReader<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable> - Interface in org.apache.hadoop.mapred.join
-
Additional operations required of a RecordReader to participate in a join.
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
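The compose overloads above assemble the join expression string that CompositeInputFormat evaluates over identically sorted and partitioned sources. A simplified illustration of how such an expression can be built as a string; the formatting below approximates, but is not guaranteed to match, the output of the real compose methods:

```java
import java.util.StringJoiner;

public class ComposeSketch {
    // Build an expression like: inner(tbl(FormatClass,"p1"),tbl(FormatClass,"p2"))
    static String compose(String op, String inputFormatClass, String... paths) {
        StringJoiner joiner = new StringJoiner(",", op + "(", ")");
        for (String p : paths) {
            joiner.add("tbl(" + inputFormatClass + ",\"" + p + "\")");
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(compose("inner",
            "org.apache.hadoop.mapred.SequenceFileInputFormat", "/a", "/b"));
    }
}
```

The resulting string is what would be handed to the join framework's configuration, so each data source appears as a tbl(...) leaf under a join operator such as inner or outer.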
- CompositeInputFormat<K extends org.apache.hadoop.io.WritableComparable> - Class in org.apache.hadoop.mapred.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputFormat
-
- CompositeInputSplit - Class in org.apache.hadoop.mapred.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeRecordReader<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,X extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapred.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompressedSplitLineReader - Class in org.apache.hadoop.mapreduce.lib.input
-
Line reader for compressed splits.
Reading records from a compressed split is tricky, as the
LineRecordReader uses the reported compressed input stream
position directly to determine when a split has ended.
- CompressedSplitLineReader(SplitCompressionInputStream, Configuration, byte[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CompressedSplitLineReader
-
- COMPRESSION_SUFFIX - Static variable in class org.apache.hadoop.contrib.failmon.LocalStore
-
- computeHash(byte[], SecretKey) - Static method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Compute the HMAC hash of the message using the key
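A minimal sketch of HMAC computation with a SecretKey, analogous to what computeHash and createSecretKey (further down this index) provide; the HmacSHA1 algorithm name is an assumption, and this is plain JCA code rather than the Hadoop class:

```java
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HmacSketch {
    // Wrap raw bytes as an HMAC key, analogous to createSecretKey(byte[]).
    static SecretKey createSecretKey(byte[] raw) {
        return new SecretKeySpec(raw, "HmacSHA1");
    }

    // Compute the HMAC of msg with the given key (HmacSHA1 assumed).
    static byte[] computeHash(byte[] msg, SecretKey key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(key);
            return mac.doFinal(msg);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```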
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
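The computeSplitSize methods above implement the familiar clamp rule for file splits: take the block size, bounded below by the minimum split size and above by the maximum. A standalone sketch (parameter order and the exact formula are assumptions, not copied from the Hadoop source):

```java
public class SplitSizeSketch {
    // Clamp the block size between the configured minimum and maximum
    // split sizes: the split is a block unless the bounds say otherwise.
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }
}
```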
- conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapred.Task
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- CONF_CONNECT_TIMEOUT_KEY - Static variable in class org.apache.hadoop.mapred.SshFenceByTcpPort
-
- CONF_IDENTITIES_KEY - Static variable in class org.apache.hadoop.mapred.SshFenceByTcpPort
-
- config_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
- configure(IndexUpdateConfiguration) - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
Configure using an index update configuration.
- configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
-
- configure(JobConf) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
-
Store job configuration.
- configure(JobConf) - Method in class org.apache.hadoop.examples.SleepJob
-
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
Get the input file name.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
The combiner does not need to be configured.
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.BinaryPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Configures the ChainMapper and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Configures the ChainReducer, the Reducer and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
Read in the partition file and build indexing data structures.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.streaming.AutoInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapRed
-
- configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeReducer
-
- configureDB(JobConf, String, String, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(JobConf, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(Configuration, String, String, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Sets the DB access related fields in the Configuration.
- configureDB(Configuration, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Sets the DB access related fields in the Configuration.
- ConfiguredFailoverProxyProvider<T> - Class in org.apache.hadoop.mapred
-
A FailoverProxyProvider implementation which allows one to configure two URIs
to connect to during fail-over.
- ConfiguredFailoverProxyProvider(Configuration, String, Class<T>) - Constructor for class org.apache.hadoop.mapred.ConfiguredFailoverProxyProvider
-
- constructQuery(String, String[]) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Constructs the query used as the prepared statement to insert data.
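A standalone sketch of the prepared-statement text that constructQuery is documented to build; the exact spacing and the absence of a trailing semicolon are assumptions:

```java
public class InsertQuerySketch {
    // Build "INSERT INTO <table> (f1,f2,...) VALUES (?,?,...)" with one
    // placeholder per field, suitable for a JDBC PreparedStatement.
    static String constructQuery(String table, String[] fieldNames) {
        StringBuilder q = new StringBuilder("INSERT INTO ").append(table);
        q.append(" (").append(String.join(",", fieldNames)).append(")");
        q.append(" VALUES (");
        for (int i = 0; i < fieldNames.length; i++) {
            if (i > 0) q.append(',');
            q.append('?');
        }
        return q.append(')').toString();
    }
}
```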
- contentEquals(Counters.Counter) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- context - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- ContextFactory - Class in org.apache.hadoop.mapreduce
-
A factory to allow applications to deal with inconsistencies between
MapReduce Context Objects API between hadoop-0.20 and later versions.
- ContextFactory() - Constructor for class org.apache.hadoop.mapreduce.ContextFactory
-
- Continuous - Class in org.apache.hadoop.contrib.failmon
-
This class runs FailMon in a continuous mode on the local
node.
- Continuous() - Constructor for class org.apache.hadoop.contrib.failmon.Continuous
-
- ControlledJob - Class in org.apache.hadoop.mapreduce.lib.jobcontrol
-
This class encapsulates a MapReduce job and its dependency.
- ControlledJob(Job, List<ControlledJob>) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- ControlledJob(Configuration) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- ControlledJob.State - Enum in org.apache.hadoop.mapreduce.lib.jobcontrol
-
- copyToHDFS(String, String) - Static method in class org.apache.hadoop.contrib.failmon.LocalStore
-
Copy a local file to HDFS
- countCounters() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the total number of counters, by summing the number of counters
in each group.
- Counter - Interface in org.apache.hadoop.mapreduce
-
A named counter that tracks the progress of a map/reduce job.
- COUNTER_GROUP - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Special counters which are written by the application and are
used by the framework for detecting bad records.
- COUNTER_GROUP_NAME_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_GROUP_NAME_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_GROUPS_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_GROUPS_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_MAP_PROCESSED_RECORDS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed map records.
- COUNTER_NAME_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_NAME_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_REDUCE_PROCESSED_GROUPS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed reduce groups.
- CounterGroup - Interface in org.apache.hadoop.mapreduce
-
A group of Counters that logically belong together.
- CounterGroupBase<T extends Counter> - Interface in org.apache.hadoop.mapreduce.counters
-
The common counter group interface.
- CounterGroupFactory<C extends Counter,G extends CounterGroupBase<C>> - Class in org.apache.hadoop.mapreduce.counters
-
An abstract class to provide common implementation of the
group factory in both mapred and mapreduce packages.
- CounterGroupFactory() - Constructor for class org.apache.hadoop.mapreduce.counters.CounterGroupFactory
-
- CounterGroupFactory.FrameworkGroupFactory<F> - Interface in org.apache.hadoop.mapreduce.counters
-
- Counters - Class in org.apache.hadoop.mapred
-
- Counters() - Constructor for class org.apache.hadoop.mapred.Counters
-
Deprecated.
- Counters(Counters) - Constructor for class org.apache.hadoop.mapred.Counters
-
Deprecated.
- Counters - Class in org.apache.hadoop.mapreduce
-
Counters holds per job/task counters, defined either by the
Map-Reduce framework or applications.
- Counters() - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Default constructor
- Counters(AbstractCounters<C, G>) - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Construct the Counters object from another counters object.
- Counters.Counter - Class in org.apache.hadoop.mapred
-
Deprecated.
A counter record, comprising its name and value.
- Counters.Counter() - Constructor for class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- Counters.Group - Class in org.apache.hadoop.mapred
-
Deprecated.
Group of counters, comprising counters from a particular
counter Enum class.
- COUNTERS_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTERS_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CountersStrings - Class in org.apache.hadoop.mapreduce.util
-
String conversion utilities for counters.
- CountersStrings() - Constructor for class org.apache.hadoop.mapreduce.util.CountersStrings
-
- countMapTasks() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the number of running map tasks.
- countOccupiedMapSlots() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the number of occupied map slots.
- countOccupiedReduceSlots() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the number of occupied reduce slots.
- countReduceTasks() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the number of running reduce tasks.
- CPUParser - Class in org.apache.hadoop.contrib.failmon
-
Objects of this class parse the /proc/cpuinfo file to
gather information about the processors present in the system.
- CPUParser() - Constructor for class org.apache.hadoop.contrib.failmon.CPUParser
-
Constructs a CPUParser
- create(JobConf, TaskAttemptID, Counters.Counter, Task.TaskReporter, OutputCommitter) - Static method in class org.apache.hadoop.mapred.Task.CombinerRunner
-
- create(Configuration) - Static method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
Internal to MapReduce framework. Use DistributedCacheManager
instead.
- createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
This method creates symlinks in another directory for all files
in a given directory.
- createDataJoinJob(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
- createFromConfiguration(Configuration, String, Class<X>) - Static method in class org.apache.hadoop.util.PluginDispatcher
-
Load a PluginDispatcher from the given Configuration.
- createIdentifier() - Method in class org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenSecretManager
-
- createIdentifier() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Create an empty job token identifier
- createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInternalValue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createJob(String[]) - Static method in class org.apache.hadoop.streaming.StreamJob
-
This method creates a streaming job from the given argument list.
- createJobDirs() - Method in class org.apache.hadoop.mapred.JobLocalizer
-
Prepare the job directories for a given job.
- createKey() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a new key value common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new key from proxied RR.
- createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
-
Create an object of the appropriate type to be used as a key.
- createKey() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a key.
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- createKVIterator() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
Create a RawKeyValueIterator from copied map outputs.
- createKVIterator() - Method in interface org.apache.hadoop.mapred.ShuffleConsumerPlugin
-
Create a key-value iterator to read the merged output.
- createLocalDirs() - Method in class org.apache.hadoop.mapred.JobLocalizer
-
- createLogDir(TaskAttemptID, boolean) - Method in class org.apache.hadoop.mapred.DefaultTaskController
-
- createLogDir(TaskAttemptID, boolean) - Method in class org.apache.hadoop.mapred.TaskController
-
Creates the task log directory.
- createNonHAProxy(Configuration, InetSocketAddress, Class<T>, UserGroupInformation, boolean) - Static method in class org.apache.hadoop.mapred.JobTrackerProxies
-
- createOutput(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- createPassword(JobTokenIdentifier) - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Create a new password/secret for the given job token identifier.
- createPool(JobConf, List<PathFilter>) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createPool(JobConf, PathFilter...) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createPool(List<PathFilter>) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createPool(PathFilter...) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createProxy(Configuration, String, Class<T>) - Static method in class org.apache.hadoop.mapred.JobTrackerProxies
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.examples.MultiFileWordCount.MyInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
This is not implemented yet.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
Create a record reader for the given split
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
-
- createReduceContext(Reducer<INKEY, INVALUE, OUTKEY, OUTVALUE>, Configuration, TaskAttemptID, RawKeyValueIterator, Counter, Counter, RecordWriter<OUTKEY, OUTVALUE>, OutputCommitter, StatusReporter, RawComparator<INKEY>, Class<INKEY>, Class<INVALUE>) - Static method in class org.apache.hadoop.mapred.Task
-
- createResetableIterator() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
The subclass can provide a different implementation of ResetableIterator.
- createRunner(TaskTracker, TaskTracker.TaskInProgress, TaskTracker.RunningJob) - Method in class org.apache.hadoop.mapred.MapTask
-
- createRunner(TaskTracker, TaskTracker.TaskInProgress, TaskTracker.RunningJob) - Method in class org.apache.hadoop.mapred.ReduceTask
-
- createRunner(TaskTracker, TaskTracker.TaskInProgress, TaskTracker.RunningJob) - Method in class org.apache.hadoop.mapred.Task
-
Return an appropriate thread runner for this task.
- createSecretKey(byte[]) - Static method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Convert the byte[] to a secret key
- createSplitFiles(Path, Configuration, FileSystem, List<InputSplit>) - Static method in class org.apache.hadoop.mapreduce.split.JobSplitWriter
-
- createSplitFiles(Path, Configuration, FileSystem, T[]) - Static method in class org.apache.hadoop.mapreduce.split.JobSplitWriter
-
- createSplitFiles(Path, Configuration, FileSystem, InputSplit[]) - Static method in class org.apache.hadoop.mapreduce.split.JobSplitWriter
-
- createSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
This method allows you to create symlinks in the current working directory
of the task to all the cache files/archives.
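The idea behind createSymlink can be illustrated with plain java.nio (this is not the Hadoop implementation, which works from the job configuration): link a cached file into a working directory under its own file name.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkSketch {
    // Link cachedFile into workDir under its own file name.
    static Path linkIntoWorkDir(Path workDir, Path cachedFile) {
        Path link = workDir.resolve(cachedFile.getFileName());
        try {
            return Files.createSymbolicLink(link, cachedFile);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Self-contained demo: make a temp "work dir" and "cache file",
    // link one into the other, and report whether a symlink resulted.
    static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("work");
            Path file = Files.createTempFile("cache", ".txt");
            return Files.isSymbolicLink(linkIntoWorkDir(dir, file));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```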
- createTaskAttemptLogDir(TaskAttemptID, boolean, String[]) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Create log directory for the given attempt.
- createUserDirs() - Method in class org.apache.hadoop.mapred.JobLocalizer
-
Initialize the local directories for a particular user on this TT.
- createValue() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new value from proxied RR.
- createValue() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- createValue() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createWorkDir(JobConf) - Method in class org.apache.hadoop.mapred.JobLocalizer
-
- credentials - Variable in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- curReader - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- curReader - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- gcd(int, int) - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Determines the greatest common divisor (GCD) of two integers.
- gcd(int[]) - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Determines the greatest common divisor (GCD) of a list
of integers.
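The two gcd overloads above presumably reduce to Euclid's algorithm; a minimal sketch of both, with the list form folding the pairwise GCD over the array:

```java
public class GcdSketch {
    // Euclid's algorithm for two integers.
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = b;
            b = a % b;
            a = t;
        }
        return Math.abs(a);
    }

    // GCD of a list: fold the pairwise gcd, starting from gcd(0, x) == x.
    static int gcd(int[] nums) {
        int g = 0;
        for (int n : nums) {
            g = gcd(g, n);
        }
        return g;
    }
}
```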
- generateActualKey(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual key from the given key/value.
- generateActualValue(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual value from the given key and value.
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateFileNameForKeyValue(K, V, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the output file name based on the given key and the
leaf file name.
- generateGroupKey(TaggedMapOutput) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
Generate a map output key.
- generateHash(byte[], SecretKey) - Static method in class org.apache.hadoop.mapreduce.security.SecureShuffleUtils
-
Base64-encoded hash of msg.
- generateInputTag(String) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
Determine the source tag based on the input file name.
- generateJobTable(JspWriter, String, List<JobInProgress>) - Method in class org.apache.hadoop.mapreduce.server.jobtracker.JobTrackerJspHelper
-
Returns an XML-formatted table of the jobs in the list.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.examples.AggregateWordCount.WordCountPlugInClass
-
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.examples.AggregateWordHistogram.AggregateWordHistogramPlugin
-
Parse the given value, generate an aggregation-id/value pair per word.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for the given key/value pairs
by delegating the invocation to the real object.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
- generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for the given key/value pair.
- generateLeafFileName(String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the leaf name for the output file name.
- generateSingleReport() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Creates a "status report" for this task.
- generateSummaryTable(JspWriter, JobTracker) - Method in class org.apache.hadoop.mapreduce.server.jobtracker.JobTrackerJspHelper
-
Generates an XML-formatted block that summarizes the state of the JobTracker.
- generateTaggedMapOutput(Object) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
Generate a tagged map output value.
- generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generationFromSegmentsFileName(String) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
-
Parse the generation off the segments file name and return it.
- GenericCounter - Class in org.apache.hadoop.mapreduce.counters
-
A generic counter implementation
- GenericCounter() - Constructor for class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- GenericCounter(String, String) - Constructor for class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- GenericCounter(String, String, long) - Constructor for class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- get(String) - Method in class org.apache.hadoop.contrib.failmon.EventRecord
-
Get the value of a property of the EventRecord.
- get(String) - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
-
Get the value of a property of the EventRecord.
- get(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapred.join.TupleWritable
-
Get ith Writable from Tuple.
- get(DataInput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesInput
-
Get a thread-local typed bytes input for the supplied DataInput.
- get(DataOutput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesOutput
-
Get a thread-local typed bytes output for the supplied DataOutput.
- get(TypedBytesInput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
-
Get a thread-local typed bytes record input for the supplied TypedBytesInput.
- get(DataInput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
-
Get a thread-local typed bytes record input for the supplied DataInput.
- get(TypedBytesOutput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
-
- get(DataOutput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
-
Get a thread-local typed bytes record output for the supplied DataOutput.
- get(TypedBytesInput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
-
Get a thread-local typed bytes writable input for the supplied TypedBytesInput.
- get(DataInput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
-
Get a thread-local typed bytes writable input for the supplied DataInput.
- get(TypedBytesOutput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableOutput
-
Get a thread-local typed bytes writable output for the supplied TypedBytesOutput.
- get(DataOutput) - Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableOutput
-
Get a thread-local typed bytes writable output for the supplied DataOutput.
- getAbsolutePath(String) - Method in class org.apache.hadoop.streaming.PathFinder
-
Returns the full path name of this file if it is listed in the path.
- getAclName() - Method in enum org.apache.hadoop.mapreduce.JobACL
-
Get the name of the ACL.
- getActiveTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of task trackers in the cluster.
- getAddress(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
-
- getAddress() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- getAliveNodesInfoJson() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getAliveNodesInfoJson() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getAllAttempts() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.JVMInfo
-
- getAllJobs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get the jobs that are submitted.
- getAllJobs() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getAllJobs() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getAllTasks() - Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Returns all map and reduce tasks.
- getArchiveClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Get the archive entries in classpath as an array of Path.
- getArchiveClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the archive entries in classpath as an array of Path
- getArchiveClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getArchiveClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getArchiveClassPaths() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the archive entries in classpath as an array of Path
- getArchiveTimestamps(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Get the timestamps of the archives.
- getArchiveTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the archives.
- getArchiveTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getArchiveTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getArchiveTimestamps() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the timestamps of the archives.
- getArchiveVisibilities(Configuration) - Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
Get booleans indicating whether the archives are public or not.
- getAssignedJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- getAssignedTracker(TaskAttemptID) - Method in class org.apache.hadoop.mapred.JobTracker
-
Get tracker name for a given task id.
- getAttemptsToStartSkipping(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of Task attempts AFTER which skip mode
will be kicked off.
- getAutoIncrMapperProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAutoIncrReducerProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAvailableMapSlots() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get available map slots.
- getAvailablePhysicalMemorySize() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the total size of the available physical memory present
in the system.
- getAvailablePhysicalMemorySize() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the total size of the available physical memory present
in the system.
- getAvailableReduceSlots() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get available reduce slots.
- getAvailableSlots(TaskType) - Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
-
Get the number of currently available slots on this tasktracker for the
given type of the task.
- getAvailableSpace() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Will return LONG_MAX if space hasn't been measured yet.
- getAvailableVirtualMemorySize() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the total size of the available virtual memory present
in the system.
- getAvailableVirtualMemorySize() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the total size of the available virtual memory present
in the system.
- getBasePathInJarOut(String) - Method in class org.apache.hadoop.streaming.JarBuilder
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
-
- getBlacklistedNodesInfoJson() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getBlacklistedNodesInfoJson() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getBlackListedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of blacklisted trackers in the cluster.
- getBlacklistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of blacklisted task trackers in the cluster.
- getBlacklistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of blacklisted task trackers in the cluster.
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getBoundAntProperty(String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getBoundingValsQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getBuildVersion() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getBundle(String) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get a resource bundle
- getCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Get cache archives set in the Configuration.
- getCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache archives set in the Configuration
- getCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCacheArchives() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get cache archives set in the Configuration
- getCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Get cache files set in the Configuration.
- getCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache files set in the Configuration
- getCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCacheFiles() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get cache files set in the Configuration
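The cache-file accessors above pair with the corresponding setters: files are registered on the Configuration at job-submission time and read back inside tasks. A minimal sketch, assuming the classic DistributedCache API and a hypothetical file already present on the job's FileSystem:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;

public class CacheFileExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Register a file to be localized on every task node.
    // The path is hypothetical; it must already exist on the job's FileSystem.
    DistributedCache.addCacheFile(new URI("/user/demo/lookup.txt"), conf);

    // Read the registered URIs back (normally done inside a task).
    URI[] cacheFiles = DistributedCache.getCacheFiles(conf);
    for (URI uri : cacheFiles) {
      System.out.println("cached: " + uri);
    }
  }
}
```

In the new API the same information is exposed through JobContext.getCacheFiles() rather than the static DistributedCache methods.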
- getCategory(List<List<Pentomino.ColumnName>>) - Method in class org.apache.hadoop.examples.dancing.Pentomino
-
Find whether the solution has the x in the upper left quadrant, the
x-midline, the y-midline or in the center.
- getClassByName(String) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
-
- getClassPaths() - Method in class org.apache.hadoop.filecache.TaskDistributedCacheManager
-
Retrieves class paths (as local references) to add.
- getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the cleanup tasks of a job.
- getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getClientInput() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
Returns the DataInput from which the client output is read.
- getClientOutput() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
Returns the DataOutput to which the client input is written.
- getClock() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getClusterMetrics() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getClusterNick() - Method in class org.apache.hadoop.streaming.StreamJob
-
Deprecated.
- getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getCollector(String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a named output.
- getCollector(String, String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a multi named output.
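getCollector(String, Reporter) is typically called from inside a mapper or reducer, after the named output has been declared on the JobConf at setup time. A hedged sketch using the old-style mapred API (the output name "text" and the key/value types are illustrative):

```java
// Assumes job setup called:
// MultipleOutputs.addNamedOutput(conf, "text",
//     TextOutputFormat.class, Text.class, LongWritable.class);
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class NamedOutputReducer extends MapReduceBase
    implements Reducer<Text, LongWritable, Text, LongWritable> {
  private MultipleOutputs mos;

  @Override
  public void configure(JobConf conf) {
    mos = new MultipleOutputs(conf);
  }

  @Override
  public void reduce(Text key, Iterator<LongWritable> values,
      OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    long sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    // Route this record to the named output "text" instead of the default.
    mos.getCollector("text", reporter).collect(key, new LongWritable(sum));
  }

  @Override
  public void close() throws IOException {
    mos.close(); // required, or named-output files may be left incomplete
  }
}
```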
- getColumnName(int) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
-
Get the name of a given column as a string
- getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- getCombinerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the combiner class for the job.
- getCombinerClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCombinerClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCombinerClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the combiner class for the job.
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable comparator for grouping keys of inputs to the combiner.
- getCombinerKeyGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the combiner.
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the user defined RawComparator comparator for grouping keys of inputs to the combiner.
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
- getCombinerOutput() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getComparator() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getCompletedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
-
Are the outputs of the maps to be compressed?
- getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Is the job output compressed?
- getCompressOutput(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Is the job output compressed?
- getConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getConf() - Method in class org.apache.hadoop.mapred.JobTracker
-
Returns a handle to the JobTracker's Configuration
- getConf() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- getConf() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- getConf() - Method in class org.apache.hadoop.mapred.lib.InputSampler
-
- getConf() - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
-
- getConf() - Method in class org.apache.hadoop.mapred.Task
-
- getConf() - Method in class org.apache.hadoop.mapred.TaskController
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.FilterBase
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getConf() - Method in class org.apache.hadoop.streaming.DumpTypedBytes
-
- getConf() - Method in class org.apache.hadoop.streaming.LoadTypedBytes
-
- getConf() - Method in class org.apache.hadoop.streaming.StreamJob
-
- getConf() - Method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
-
- getConfiguration() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the underlying configuration object.
- getConfiguration() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Return the configuration for the job.
- getConfiguration() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getConfiguration() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getConfiguration() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Return the configuration for the job.
- getConfiguration() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
Returns the Configuration.
- getConfigVersion() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getConfigVersion() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getConfigVersion() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getConfigVersion() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Returns a connection object to the DB
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getContext() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
- getCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
Returns current value of the specified counter, or 0 if the counter
does not exist.
- getCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- getCounter(int, String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapred.Reporter
-
- getCounter(String, String) - Method in interface org.apache.hadoop.mapred.Reporter
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Deprecated.
- getCounter(String, String) - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Deprecated.
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCounter(Enum) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the Counter for the given counterName.
- getCounter(String, String) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the Counter for the given groupName and counterName.
- getCounterForName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
Get the counter for the given name and create it if it doesn't exist.
- getCounterGroupName(String, String) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get the counter group display name
- getCounterName(String, String, String) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get the counter display name
- getCounterNameMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getCounters() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Returns the total job counters, by adding together the job,
the map and the reduce counters.
- getCounters() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Gets the counters for this job.
- getCounters() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the task's counters
- getCounters() - Method in class org.apache.hadoop.mapred.TaskReport
-
A table of counters.
- getCounters() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get task's counters.
- getCounters() - Method in class org.apache.hadoop.mapreduce.Job
-
Gets the counters for this job.
- getCountersEnabled(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns whether the counters for the named outputs are enabled or not.
- getCountersEnabled(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Returns whether the counters for the named outputs are enabled or not.
- getCountersMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getCountQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Returns the query for getting the total number of rows,
subclasses can override this for custom behaviour.
- getCpuFrequency() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Get the CPU frequency of this TaskTracker.
Will return UNAVAILABLE if it cannot be obtained.
- getCpuFrequency() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the CPU frequency of the system.
- getCpuFrequency() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the CPU frequency of the system.
- getCpuUsage() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Get the CPU usage on this TaskTracker.
Will return UNAVAILABLE if it cannot be obtained.
- getCpuUsage() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the CPU usage % of the machine.
- getCpuUsage() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the CPU usage % of the machine.
- getCredentials() - Method in class org.apache.hadoop.mapred.JobConf
-
Get credentials for the job.
- getCredentials() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get credentials for the job.
- getCredentials() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCredentials() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCredentials() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getCumulativeCpuTime() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Get the cumulative CPU time on this TaskTracker since it came up.
Will return UNAVAILABLE if it cannot be obtained.
- getCumulativeCpuTime() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the cumulative CPU time since the system was started.
- getCumulativeCpuTime() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get the CPU time in milliseconds used by all the processes in the
process-tree since the process-tree was created.
- getCumulativeCpuTime() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the cumulative CPU time since the system was started.
- getCumulativeCpuTime() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin.ProcResourceValues
-
Obtain the cumulative CPU time used by a current process tree.
- getCumulativeRssmem() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get the cumulative resident set size (rss) memory used by all the processes
in the process-tree.
- getCumulativeRssmem(int) - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get the cumulative resident set size (rss) memory used by all the processes
in the process-tree that are older than the passed in age.
- getCumulativeVmem() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get the cumulative virtual memory used by all the processes in the
process-tree.
- getCumulativeVmem(int) - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get the cumulative virtual memory used by all the processes in the
process-tree that are older than the passed in age.
- getCurrentKey() - Method in class org.apache.hadoop.examples.MultiFileWordCount.CombineFileLineRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Get the current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Get the current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.task.MapContextImpl
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl
-
Get the current key.
- getCurrentKey() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
Get the current key.
- getCurrentKey() - Method in class org.apache.hadoop.streaming.io.OutputReader
-
Returns the current key.
- getCurrentKey() - Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- getCurrentKey() - Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- getCurrentKey() - Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- getCurrentSegmentGeneration(Directory) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
-
Get the generation (N) of the current segments_N file in the directory.
- getCurrentSegmentGeneration(String[]) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
-
Get the generation (N) of the current segments_N file from a list of
files.
- getCurrentSplit(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getCurrentStatus() - Method in class org.apache.hadoop.mapred.TaskReport
-
The current status
- getCurrentValue() - Method in class org.apache.hadoop.examples.MultiFileWordCount.CombineFileLineRecordReader
-
- getCurrentValue(V) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.task.MapContextImpl
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl
-
Get the current value.
- getCurrentValue() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.streaming.io.OutputReader
-
Returns the current value.
- getCurrentValue() - Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- getCurrentValue() - Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- getCurrentValue() - Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- getData() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- getDBConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDBConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getDBProductName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDecommissionedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of decommissioned trackers in the cluster.
- getDefaultMaps() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the max available Maps in the cluster.
- getDefaultReduces() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the max available Reduces in the cluster.
- getDefaultWorkFile(TaskAttemptContext, String) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the default path and filename for the output format.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Return an iterator wrapping the JoinCollector.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Return an iterator returning a single value from the tuple.
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapred.JobClient
-
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapred.JobTracker
-
Get a new delegation token.
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getDelegationTokens(Configuration, Credentials) - Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
For each archive or cache file - get the corresponding delegation token
- getDelegationTokenSecretManager() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getDelegationTokenService() - Method in class org.apache.hadoop.mapred.JobTrackerProxies.ProxyAndInfo
-
- getDependentJobs() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getDependingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- getDiagnosticInfo(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the diagnostic messages for a given task within this tip.
- getDiagnosticInfo() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getDiagnostics() - Method in class org.apache.hadoop.mapred.TaskReport
-
A list of error messages.
- getDirectory() - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
Get the ram directory of the intermediate form.
- getDirectory() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
Get the directory where this shard resides.
- getDirFailures() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the number of local directories that have failed on this tracker.
- getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- getDisplayName() - Method in interface org.apache.hadoop.mapreduce.Counter
-
Get the display name of the counter.
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- getDisplayName() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Get the display name of the group.
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getDistributionPolicyClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the distribution policy class.
- getDocument() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
Get the document.
- getDocumentAnalyzerClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the analyzer class.
- getDoubleValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
- getEnd() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
Get an entry from output generated by this class.
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
Get an entry from output generated by this class.
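getEntry pairs the same Partitioner that was used at write time with MapFile readers over the job's output directory, so the lookup goes straight to the part file holding the key. A sketch assuming a previous job wrote Text/Text MapFiles with HashPartitioner into a hypothetical directory:

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapFileOutputFormat;
import org.apache.hadoop.mapred.lib.HashPartitioner;

public class MapFileLookup {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    Path outDir = new Path("/user/demo/out"); // hypothetical job output dir
    // Open one MapFile.Reader per part file in the output directory.
    MapFile.Reader[] readers =
        MapFileOutputFormat.getReaders(FileSystem.get(conf), outDir, conf);
    // The partitioner must match the one used when the output was written.
    Text value = new Text();
    Text found = MapFileOutputFormat.getEntry(
        readers, new HashPartitioner<Text, Text>(), new Text("someKey"), value);
    System.out.println(found == null ? "not found" : found.toString());
  }
}
```

The new-API variant under org.apache.hadoop.mapreduce.lib.output works the same way with the corresponding mapreduce Partitioner.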
- getEventId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns event Id.
- getEventType() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogEvent
-
- getExecFinishTime() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return the exec finish time
- getExecStartTime() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return the exec start time
- getExecutable(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Get the URI of the application's executable.
- getFailedJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getFailedJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Deprecated.
- getFailedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getFailureInfo() - Method in class org.apache.hadoop.mapred.JobStatus
-
Gets any available info on the reason of failure of the job.
- getFailureInfo() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get failure info for the job.
- getFailures() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the number of tasks that have failed on this tracker.
- getFencer() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- getFetchFailedMaps() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get the list of maps from which output-fetches failed.
- getFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getFieldSeparator() - Method in class org.apache.hadoop.streaming.PipeMapper
-
- getFieldSeparator() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
Returns the field separator to be used.
- getFieldSeparator() - Method in class org.apache.hadoop.streaming.PipeReducer
-
- getFileBlockLocations(FileSystem, FileStatus) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getFileClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Get the file entries in classpath as an array of Path.
- getFileClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the file entries in classpath as an array of Path
- getFileClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getFileClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getFileClassPaths() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the file entries in classpath as an array of Path
- getFileStatus(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Returns the FileStatus of a given cache file on HDFS.
- getFileSystemCounterNames(String) - Static method in class org.apache.hadoop.mapred.Task
-
Counters to measure the usage of the different file systems.
- getFilesystemName() - Method in class org.apache.hadoop.mapred.JobTracker
-
Grab the local fs name
- getFilesystemName() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getFileTimestamps(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Get the timestamps of the files.
- getFileTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the files.
- getFileTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getFileTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getFileTimestamps() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the timestamps of the files.
- getFileVisibilities(Configuration) - Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
Get the booleans on whether the files are public or not.
- getFinalSync(JobConf) - Static method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
-
Does the user want a final sync at close?
- getFinishTime() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getFinishTime() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get finish time of task.
- getFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get task finish time.
- getFirst() - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
-
- getFlippable() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the lower bound on split size imposed by the format.
- getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- getFrameworkGroupId(String) - Static method in class org.apache.hadoop.mapreduce.counters.CounterGroupFactory
-
Get the id of a framework group
- getFs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get a filesystem handle.
- getGeneration() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
Get the generation of the Lucene instance.
- getGroup(String) - Method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
- getGroup(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the named counter group, or an empty group if there is none
with the specified name.
- getGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
- getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
- getGroupNameMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getGroupNames() - Method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
- getGroupNames() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the names of all counter classes.
- GetGroups - Class in org.apache.hadoop.mapred.tools
-
MR implementation of a tool for getting the groups which a given user
belongs to.
- GetGroupsBase - Class in org.apache.hadoop.mr1tools
-
Base class for the HDFS and MR implementations of tools which fetch and
display the groups that users belong to.
- GetGroupsBase(Configuration) - Constructor for class org.apache.hadoop.mr1tools.GetGroupsBase
-
Create an instance of this tool using the given configuration.
- GetGroupsBase(Configuration, PrintStream) - Constructor for class org.apache.hadoop.mr1tools.GetGroupsBase
-
Used exclusively for testing.
- getGroupsForUser(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getGroupsForUser(String) - Method in interface org.apache.hadoop.mr1tools.GetUserMappingsProtocol
-
Get the groups which are mapped to the given user.
- getGroupsMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getHadoopClientHome() - Method in class org.apache.hadoop.streaming.StreamJob
-
- getHaJtRpcAddresses(Configuration) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getHealthStatus() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Returns health status of the task tracker.
- getHistoryFilePath(JobID) - Static method in class org.apache.hadoop.mapred.JobHistory
-
Given the job id, return the history file path from the cache
- getHost() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.MapOutputLocation
-
- getHost() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getHost() - Method in class org.apache.hadoop.streaming.Environment
-
- getHostname() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getHostname() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getHostname() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getHostname() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getHttpPort() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getHttpPort() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getHttpPort() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the port that this task tracker is serving http requests on.
- getID() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the job identifier.
- getId() - Method in class org.apache.hadoop.mapreduce.ID
-
Returns the int which represents the identifier.
- getIdWithinJob() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the id of this map or reduce task.
- getIncludeAllCounters() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getIndex(int) - Method in class org.apache.hadoop.mapred.SpillRecord
-
Get spill offsets for given partition.
- getIndexInputFormatClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the index input format class.
- getIndexMaxFieldLength() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the max field length for a Lucene instance.
- getIndexMaxNumSegments() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the max number of segments for a Lucene instance.
- getIndexShards() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the string representation of a number of shards.
- getIndexShards(IndexUpdateConfiguration) - Static method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- getIndexUpdaterClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the index updater class.
- getIndexUseCompoundFile() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Check whether to use the compound file format for a Lucene instance.
- getInfo() - Method in class org.apache.hadoop.contrib.failmon.CPUParser
-
Return a String with information about this class
- getInfo() - Method in class org.apache.hadoop.contrib.failmon.HadoopLogParser
-
Return a String with information about this class
- getInfo() - Method in interface org.apache.hadoop.contrib.failmon.Monitored
-
Return a String with information about the implementing
class
- getInfo() - Method in class org.apache.hadoop.contrib.failmon.NICParser
-
Return a String with information about this class
- getInfo() - Method in class org.apache.hadoop.contrib.failmon.SensorsParser
-
Return a String with information about this class
- getInfo() - Method in class org.apache.hadoop.contrib.failmon.SMARTParser
-
Return a String with information about this class
- getInfo() - Method in class org.apache.hadoop.contrib.failmon.SystemLogParser
-
Return a String with information about this class
- getInfoPort() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getInputBoundingQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputClass() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputCountQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputDataLength() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getInputDataLength() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getInputFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputFile(int) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return a local reduce input file created earlier
- getInputFileBasedOutputFileName(JobConf, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the outfile name based on a given name and the input file name.
- getInputFileForWrite(TaskID, long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local reduce input file name.
- getInputFormat() - Method in class org.apache.hadoop.mapred.JobConf
-
- getInputFormatClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getInputOrderBy() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputPathFilter(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Get a PathFilter instance of the filter set for the input paths.
- getInputPathFilter(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get a PathFilter instance of the filter set for the input paths.
- getInputPaths(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Get the list of input Paths for the map-reduce job.
- getInputPaths(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the list of input Paths for the map-reduce job.
- getInputQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputSeparator() - Method in class org.apache.hadoop.streaming.PipeMapper
-
- getInputSeparator() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
Returns the input separator to be used.
- getInputSeparator() - Method in class org.apache.hadoop.streaming.PipeReducer
-
- getInputSplit() - Method in interface org.apache.hadoop.mapred.Reporter
-
- getInputSplit() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getInputSplit() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
Get the input split for this map.
- getInputSplit() - Method in interface org.apache.hadoop.mapreduce.MapContext
-
Get the input split for this map.
- getInputSplit() - Method in class org.apache.hadoop.mapreduce.task.MapContextImpl
-
Get the input split for this map.
- getInputTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputWriterClass() - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
- getInstance() - Static method in class org.apache.hadoop.mapred.CleanupQueue
-
- getInstance() - Static method in class org.apache.hadoop.mapreduce.Job
-
Creates a new Job. A Job will be created with a generic Configuration.
- getInstance(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
Creates a new Job with a given Configuration.
- getInstance(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.Job
-
Creates a new Job with a given Configuration and a given jobName.
- getInstrumentationClass(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
-
- getInstrumentationClass(Configuration) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getInterface() - Method in class org.apache.hadoop.mapred.ConfiguredFailoverProxyProvider
-
- getIntermediateOutputDir(String, String, String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getInterval(ArrayList<MonitorJob>) - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Determines the minimum interval at which the executor thread
needs to wake up to execute jobs.
- getIOSortMB() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the IO sort space in MB.
- getIsCleanup() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
Get whether task is cleanup attempt or not.
- getIsJavaMapper(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java Mapper.
- getIsJavaRecordReader(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java RecordReader
- getIsJavaRecordWriter(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Will the reduce use a Java RecordWriter?
- getIsJavaReducer(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java Reducer.
- getIsMap() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getJar() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user jar for the map-reduce job.
- getJar() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the pathname of the job's jar.
- getJar() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the pathname of the job's jar.
- getJar() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJar() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJar() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the pathname of the job's jar.
- getJarUnpackPattern() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the pattern for jar contents to unpack on the tasktracker
- getJob(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJob(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJob(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getJob() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- getJob() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return the parent job
- getJob() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobACLs() - Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Get the job acls.
- getJobACLs() - Method in class org.apache.hadoop.mapred.JobStatus
-
Get the acls for Job.
- getJobCacheSubdir(String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getJobClient() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- getJobClient() - Method in class org.apache.hadoop.mapred.TaskTracker
-
The connection to the JobTracker, used by the TaskRunner
for locating remote files.
- getJobCompletionTime() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobCompletedEvent
-
Get the job completion time-stamp in milli-seconds.
- getJobConf() - Method in interface org.apache.hadoop.mapred.JobContext
-
Get the job Configuration
- getJobConf() - Method in class org.apache.hadoop.mapred.JobContextImpl
-
Deprecated.
Get the job Configuration
- getJobConf() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- getJobConf() - Method in class org.apache.hadoop.mapred.MapOutputCollector.Context
-
- getJobConf() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getJobConf() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
Deprecated.
- getJobConf() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Deprecated.
- getJobConf(JobID) - Method in class org.apache.hadoop.mapred.TaskTracker
-
Get the specific job conf for a running job.
- getJobConf() - Method in class org.apache.hadoop.mapred.TaskTracker
-
Get the default job conf for this tracker.
- getJobConfPath(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job conf path.
- getJobCounters() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Returns the job-level counters.
- getJobCounters(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobCounters(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getJobDir(String) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Get the user log directory for the job jobid.
- getJobDir(JobID) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Get the user log directory for the job jobid.
- getJobDistCacheArchives(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job distributed cache archives path.
- getJobDistCacheFiles(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job distributed cache files path.
- getJobDistCacheLibjars(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job distributed cache libjars path.
- getJobEndNotificationURI() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the uri to be invoked in-order to send a notification after the job
has completed (success/failure).
- getJobFile() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the configuration file for the job.
- getJobFile() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the path of the submitted job configuration.
- getJobFile() - Method in class org.apache.hadoop.mapred.Task
-
- getJobForFallowSlot(TaskType) - Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
-
- getJobHistoryFileName(JobConf, JobID) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Recover the job history filename from the history folder.
- getJobHistoryFileNameParts(String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
- getJobHistoryLogLocation(String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Get the job history file path given the history filename
- getJobHistoryLogLocationForUser(String, JobConf) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Get the user job history file path
- getJobID() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getJobID() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the job id.
- getJobId() - Method in class org.apache.hadoop.mapred.JobProfile
-
Deprecated.
use getJobID() instead
- getJobId() - Method in class org.apache.hadoop.mapred.JobStatus
-
Deprecated.
use getJobID instead
- getJobID() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getJobID() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Deprecated.
This method is deprecated and will be removed. Applications should rather use RunningJob.getID().
- getJobID() - Method in class org.apache.hadoop.mapred.Task
-
Get the job name for this task.
- getJobID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getJobID() - Method in class org.apache.hadoop.mapred.TaskID
-
- getJobID() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the unique ID for the job.
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJobId() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
-
Get the jobid
- getJobID() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.DeleteJobEvent
-
Get the jobid.
- getJobID() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobCompletedEvent
-
Get the job id.
- getJobID() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobStartedEvent
-
Get the job id.
- getJobID() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the unique ID for the job.
- getJobID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the JobID object that this task attempt belongs to.
- getJobID() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Returns the JobID object that this tip belongs to.
- getJobIDsPattern(String, Integer) - Static method in class org.apache.hadoop.mapred.JobID
-
Deprecated.
- getJobJar(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job jar path.
- getJobJarFile(String, String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getJobLocalDir() - Method in class org.apache.hadoop.mapred.JobConf
-
Get job-specific shared directory for use as scratch space
- getJobName() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-specified job name.
- getJobName() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the user-specified job name.
- getJobName() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the name of the job.
- getJobName() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user-specified job name.
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the user-specified job name.
- getJobPriority() - Method in class org.apache.hadoop.mapred.JobConf
-
- getJobPriority() - Method in class org.apache.hadoop.mapred.JobStatus
-
Return the priority of the job
- getJobProfile(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobProfile(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getJobRunState(int) - Static method in class org.apache.hadoop.mapred.JobStatus
-
Helper method to get human-readable state of the job.
- getJobs() - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Scans the configuration file to determine which monitoring
utilities are available in the system.
- getJobSetupCleanupNeeded() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether job-setup and job-cleanup is needed for the job
- getJobSetupCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJobSetupCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJobSetupCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get whether job-setup and job-cleanup is needed for the job
- getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Gets all the jobs which were added to particular Job Queue
- getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getJobSplitFile(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
- getJobSplitMetaFile(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
- getJobState() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Returns the current state of the Job.
- getJobState() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobStatus(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobStatus(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getJobStatus() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Returns a snapshot of the current status, JobStatus, of the Job.
- getJobSubmitHostAddress() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getJobSubmitHostName() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getJobToken(Credentials) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
- getJobTokenSecret() - Method in class org.apache.hadoop.mapred.Task
-
Get the job token secret
- getJobTracker() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- getJobTrackerHAServiceProtocol() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- getJobTrackerHostPort() - Method in class org.apache.hadoop.streaming.StreamJob
-
- getJobTrackerId(Configuration) - Static method in class org.apache.hadoop.mapred.HAUtil
-
Get the jobtracker Id by matching the addressKey
with the address of the local node.
- getJobTrackerId() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- getJobTrackerIdOfOtherNode(Configuration) - Static method in class org.apache.hadoop.mapred.HAUtil
-
Get the jobtracker Id of the other node in an HA setup.
- getJobTrackerMachine() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobTrackerStatus() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the JobTracker's status.
- getJobTrackerUrl() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getJobTrackerUrl() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getJtHaHttpRedirectAddress(Configuration, String) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getJtHaRpcAddress(Configuration) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getJtHaRpcAddress(Configuration, String) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getJtIdentifier() - Method in class org.apache.hadoop.mapreduce.JobID
-
- getJvmContext() - Method in class org.apache.hadoop.mapred.Task
-
Gets the task JvmContext
- getJvmInfo() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JvmFinishedEvent
-
Get the jvm info.
- getJvmManagerInstance() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getKeepCommandFile(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Does the user want to keep the command file for debugging? If this is
true, pipes will write a copy of the command data to a file in the
task directory named "downlink.data", which may be used to run the C++
program under the debugger.
- getKeepFailedTaskFiles() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the temporary files for failed tasks be kept?
- getKeepTaskFilesPattern() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the regular expression that is matched against the task names
to see if we need to keep the files.
- getKey() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- getKey() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Gets the current raw key.
- getKey() - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getKeyClass() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getKeyClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getKeyClassName() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the key class for this SequenceFile.
- getKeyClassName() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the key class for this SequenceFile.
- getKeyFieldComparatorOption() - Method in class org.apache.hadoop.mapred.JobConf
-
- getKeyFieldComparatorOption(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getKeyFieldPartitionerOption() - Method in class org.apache.hadoop.mapred.JobConf
-
- getKeyFieldPartitionerOption(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getKind() - Method in class org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier.Renewer
-
- getLastOutput() - Method in class org.apache.hadoop.streaming.io.OutputReader
-
Returns the last output from the client as a String.
- getLastOutput() - Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- getLastOutput() - Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- getLastOutput() - Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- getLastSeen() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getLaunchTime() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getLength() - Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
-
- getLength() - Method in class org.apache.hadoop.mapred.FileSplit
-
The number of bytes in the file to process.
- getLength() - Method in interface org.apache.hadoop.mapred.InputSplit
-
Get the total number of bytes in the data of the InputSplit.
- getLength() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Return the aggregate length of all child InputSplits currently added.
- getLength(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get the length of the ith child InputSplit.
- getLength() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- getLength(int) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns the length of the ith Path
- getLength() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Get the size of the split, so that the input splits can be sorted by size.
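The framework uses getLength() to order splits so that the largest ones are scheduled first. A minimal pure-Java sketch of that ordering, using a hypothetical SimpleSplit stand-in rather than the real Hadoop InputSplit class:

```java
import java.util.Arrays;

// Hypothetical stand-in for an InputSplit: only the length matters here.
class SimpleSplit {
    final String name;
    final long length; // bytes, as reported by getLength()

    SimpleSplit(String name, long length) {
        this.name = name;
        this.length = length;
    }

    // Sort splits largest-first, mirroring how split sizes drive scheduling order.
    static SimpleSplit[] sortBySize(SimpleSplit[] splits) {
        SimpleSplit[] sorted = splits.clone();
        Arrays.sort(sorted, (a, b) -> Long.compare(b.length, a.length));
        return sorted;
    }
}
```

The real job client performs an equivalent descending sort on the InputSplit array it receives from the InputFormat.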
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- getLength(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the length of the ith Path
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The number of bytes in the file to process.
- getLengths() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns an array containing the lengths of the files in the split
- getLengths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns an array containing the lengths of the files in the split
- getLocalAnalysisClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the local analysis class.
- getLocalCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Return the path array of the localized caches.
- getLocalCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Return the path array of the localized caches
- getLocalCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getLocalCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getLocalCacheArchives() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Return the path array of the localized caches
- getLocalCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Return the path array of the localized files.
- getLocalCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Return the path array of the localized files
- getLocalCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getLocalCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getLocalCacheFiles() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Return the path array of the localized files
- getLocalDirs() - Method in class org.apache.hadoop.mapred.JobConf
-
- getLocalDirs() - Method in class org.apache.hadoop.mapred.TaskController
-
- getLocalJobDir(String, String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getLocalJobFilePath(JobID) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Get the path of the locally stored job file
- getLocalJobFilePath(JobID) - Static method in class org.apache.hadoop.mapred.JobTracker
-
Get the localized job file path on the job trackers local file system
- getLocalMaxRunningMaps(JobContext) - Static method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getLocalPath(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Constructs a local file name.
- getLocalTaskDir(String, String, String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getLocalTaskDir(String, String, String, boolean) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getLocation(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get the locations of the ith child InputSplit.
- getLocation() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.MapOutputCopier
-
Get the current map output location.
- getLocations() - Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
-
- getLocations() - Method in class org.apache.hadoop.mapred.FileSplit
-
- getLocations() - Method in interface org.apache.hadoop.mapred.InputSplit
-
Get the list of hostnames where the input split is located.
- getLocations() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Collect a set of hosts from all child InputSplits.
- getLocations() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns the list of hosts where this input-split resides
- getLocations() - Method in class org.apache.hadoop.mapred.MultiFileSplit
-
Deprecated.
- getLocations() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Get the list of nodes by name where the data for the split would be local.
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
Get the list of nodes by name where the data for the split would be local.
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the list of hosts where this input-split resides
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getLogicalName(String) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getLogicalName(Configuration) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getLogicalName() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- getLogLocation() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.JVMInfo
-
- getLongValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
- getLowerClause() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getMap() - Method in class org.apache.hadoop.contrib.failmon.EventRecord
-
Return the HashMap of properties of the EventRecord.
- getMapCompletionEvents(JobID, int, int, TaskAttemptID, JvmContext) - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getMapCompletionEvents(JobID, int, int, TaskAttemptID, JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Called by a reduce task to get the map output locations for finished maps.
- getMapContext(MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper
-
Get a wrapped Mapper.Context for custom implementations.
- getMapCounters() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Returns map phase counters by summing over all map tasks in progress.
- getMapDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the map task's debug script.
- getMapInputSize() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- getMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
-
Get the CompressionCodec for compressing the map outputs.
- getMapOutputFile() - Method in class org.apache.hadoop.mapred.Task
-
- getMapOutputKeyClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
-
Get the map output key class.
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the key class for the map output data.
- getMapOutputKeyClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the key class for the map output data.
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the key class for the map output data.
- getMapOutputValueClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
-
Get the map output value class.
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the value class for the map output data.
- getMapOutputValueClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the value class for the map output data.
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the value class for the map output data.
- getMapper() - Method in class org.apache.hadoop.mapred.MapRunner
-
- getMapperClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the Mapper class for the job.
- getMapperClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the Mapper class for the job.
- getMapperClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Get the application's mapper class.
- getMapperClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMapperClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMapperClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the Mapper class for the job.
- getMapperMaxSkipRecords(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of acceptable skip records surrounding the bad record PER
bad record in mapper.
- getMapredJobID() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMapredTempDir() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Get the Map/Reduce temp directory.
- getMapRunnerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getMapSlotCapacity() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of map slots in the cluster.
- getMapSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job for map tasks? Defaults to true.
- getMapTask() - Method in class org.apache.hadoop.mapred.MapOutputCollector.Context
-
- getMapTaskCompletionEvents() - Method in class org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate
-
- getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the map tasks of a job.
- getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of currently running map tasks in the cluster.
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
- getMaxMapAttempts() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of maximum attempts that will be made to run a map task.
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
- getMaxMapSlots() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the maximum map slots for this node.
- getMaxMapTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the maximum percentage of map tasks that can fail without
the job being aborted.
- getMaxMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the maximum capacity for running map tasks in the cluster.
- getMaxMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getMaxPhysicalMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
This variable is deprecated and no longer in use.
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
- getMaxReduceAttempts() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of maximum attempts that will be made to run a reduce task.
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
- getMaxReduceSlots() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the maximum reduce slots for this node.
- getMaxReduceTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the maximum percentage of reduce tasks that can fail without
the job being aborted.
- getMaxReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the maximum capacity for running reduce tasks in the cluster.
- getMaxSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the maximum split size.
- getMaxStringSize() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getMaxTaskFailuresPerTracker() - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Get the maximum number of failures of a given job per tasktracker.
- getMaxVirtualMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
-
- getMD5Hash(String) - Static method in class org.apache.hadoop.contrib.failmon.Anonymizer
-
Create the MD5 digest of an input text.
- getMemoryCalculatorPlugin(Class<? extends MemoryCalculatorPlugin>, Configuration) - Static method in class org.apache.hadoop.util.MemoryCalculatorPlugin
-
Get the MemoryCalculatorPlugin from the class name and configure it.
- getMemoryForMapTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Get memory required to run a map task of the job, in MB.
- getMemoryForReduceTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Get memory required to run a reduce task of the job, in MB.
- getMergeThrowable() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- getMergeThrowable() - Method in interface org.apache.hadoop.mapred.ShuffleConsumerPlugin
-
To get any exception from merge.
- getMessage() - Method in exception org.apache.hadoop.mapred.InvalidInputException
-
Get a summary message of the problems found.
- getMessage() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Get a summary message of the problems found.
- getMessage() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMinSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the minimum split size
- getName() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getName() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- getName() - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- getName() - Method in interface org.apache.hadoop.mapreduce.Counter
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- getName() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Get the internal name of the group
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getNamedOutputFormatClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the named output OutputFormat.
- getNamedOutputKeyClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the key class for a named output.
- getNamedOutputs() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns iterator with the defined name outputs.
- getNamedOutputsList(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns list of channel names.
- getNamedOutputValueClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the value class for a named output.
- getNewJobId() - Method in class org.apache.hadoop.mapred.JobTracker
-
Allocates a new JobId string.
- getNewJobId() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getNext() - Method in class org.apache.hadoop.contrib.failmon.LogParser
-
Continue parsing the log file until a valid log entry is identified.
- getNextHeartbeatInterval() - Method in class org.apache.hadoop.mapred.JobTracker
-
Calculates next heartbeat interval using cluster size.
- getNextRecordRange() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get the next record range which is going to be processed by Task.
- getNode(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
Return the Node in the network topology that corresponds to the hostname
- getNode() - Method in class org.apache.hadoop.mapred.join.Parser.NodeToken
-
- getNode() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNodesAtMaxLevel() - Method in class org.apache.hadoop.mapred.JobTracker
-
Returns a collection of nodes at the max level
- getNum() - Method in class org.apache.hadoop.mapred.join.Parser.NumToken
-
- getNum() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNumberColumns() - Method in class org.apache.hadoop.examples.dancing.DancingLinks
-
Get the number of columns.
- getNumberOfFailedMachines() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the number of machines where this task has failed.
- getNumberOfThreads(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
The number of threads in the thread pool that will run the map function.
- getNumberOfUniqueHosts() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getNumExcludedNodes() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of excluded hosts in the cluster.
- getNumLinesPerSplit(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Get the number of lines per split
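NLineInputFormat carves its input so that each split (and hence each mapper) receives a fixed number of input lines. A self-contained sketch of that grouping in plain Java, with a hypothetical NLineSketch helper rather than the Hadoop implementation:

```java
import java.util.ArrayList;
import java.util.List;

class NLineSketch {
    // Group 'lines' into chunks of at most 'numLinesPerSplit' lines each,
    // the way NLineInputFormat produces one split per N input lines.
    static List<List<String>> splitByLines(List<String> lines, int numLinesPerSplit) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += numLinesPerSplit) {
            // The final chunk may hold fewer than numLinesPerSplit lines.
            splits.add(lines.subList(i, Math.min(i + numLinesPerSplit, lines.size())));
        }
        return splits;
    }
}
```
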
- getNumMaps() - Method in class org.apache.hadoop.mapred.ReduceTask
-
- getNumMapTasks() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of map tasks for this job.
- getNumOfKeyFields() - Method in class org.apache.hadoop.streaming.PipeMapper
-
- getNumOfKeyFields() - Method in class org.apache.hadoop.streaming.PipeMapRed
-
Returns the number of key fields.
- getNumOfKeyFields() - Method in class org.apache.hadoop.streaming.PipeReducer
-
- getNumPaths() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns the number of Paths in the split
- getNumPaths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the number of Paths in the split
- getNumProcessors() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Get the number of processors on this TaskTracker.
Will return UNAVAILABLE if it cannot be obtained.
- getNumProcessors() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the total number of processors present on the system.
- getNumProcessors() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the total number of processors present on the system.
- getNumReduceTasks() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of reduce tasks for this job.
- getNumReduceTasks() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of reduce tasks for this job.
- getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the configured number of reduce tasks for this job.
- getNumReservedTaskTrackersForMaps() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumReservedTaskTrackersForReduces() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumResolvedTaskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getNumSchedulingOpportunities() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumSlots() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getNumSlotsPerTask(TaskType) - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumSlotsRequired() - Method in class org.apache.hadoop.mapred.Task
-
- getNumTaskCacheLevels() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getNumTasksToExecutePerJvm() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the number of tasks that a spawned JVM should execute
- getOccupiedMapSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get number of occupied map slots in the cluster.
- getOccupiedReduceSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of occupied reduce slots in the cluster.
- getOffset(int) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns the start offset of the ith Path
- getOffset(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the start offset of the ith Path
- getOp() - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
-
Get the type of the operation.
- getOp() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
Get the type of operation.
- getOutputCommitter() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Get the output committer for this output format.
- getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl
-
- getOutputCommitter() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
- getOutputCompressionType(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Get the SequenceFile.CompressionType for the output SequenceFile.
- getOutputCompressionType(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
Get the SequenceFile.CompressionType for the output SequenceFile.
- getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the CompressionCodec for compressing the job outputs.
- getOutputCompressorClass(JobContext, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the CompressionCodec for compressing the job outputs.
- getOutputFieldCount() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFile() - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return the path to local map output file created earlier
- getOutputFileForWrite(long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map output file name.
- getOutputFormat() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputFormatClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getOutputIndexFile() - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return the path to a local map output index file created earlier
- getOutputIndexFileForWrite(long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map output index file name.
- getOutputKeyClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
Get the reduce output key class.
- getOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the key class for the job output data.
- getOutputKeyClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the key class for the job output data.
- getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the key class for the job output data.
- getOutputKeyClass() - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
Returns the resolved output key class.
- getOutputKeyComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the RawComparator comparator used to compare keys.
- getOutputLocation() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.MapOutputLocation
-
- getOutputName(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the base output name for the output file.
- getOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the Path to the output directory for the map-reduce job.
- getOutputPath(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the Path to the output directory for the map-reduce job.
- getOutputReaderClass() - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
- getOutputSize() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Returns the number of bytes of output from this map.
- getOutputTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputValueClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
Get the reduce output value class.
- getOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the value class for job outputs.
- getOutputValueClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the value class for job outputs.
- getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the value class for job outputs.
- getOutputValueClass() - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
Returns the resolved output value class.
- getOutputValueGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable comparator for grouping keys of inputs to the reduce.
- getParentNode(Node, int) - Static method in class org.apache.hadoop.mapred.JobTracker
-
- getPartition(Shard, IntermediateForm, int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
-
- getPartition(SecondarySort.IntPair, IntWritable, int) - Method in class org.apache.hadoop.examples.SecondarySort.FirstPartitioner
-
- getPartition(IntWritable, NullWritable, int) - Method in class org.apache.hadoop.examples.SleepJob
-
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
Use Object.hashCode() to partition.
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- getPartition(int, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- getPartition(K2, V2, int) - Method in interface org.apache.hadoop.mapred.Partitioner
-
Get the partition number for a given key (hence record) given the total number of partitions, i.e. the total number of reduce tasks for the job.
- getPartition() - Method in class org.apache.hadoop.mapred.Task
-
Get the index of this task within the job.
- getPartition(BinaryComparable, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Use (the specified slice of the array returned by) BinaryComparable.getBytes() to partition.
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
-
Use Object.hashCode() to partition.
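The hashCode-based assignment used by HashPartitioner is conventionally written as `(hash & Integer.MAX_VALUE) % numPartitions`; the mask clears the sign bit so the result is always a valid non-negative partition index. A self-contained sketch of that formula (an illustrative class, not the Hadoop partitioner itself):

```java
class HashPartitionSketch {
    // Conventional hash-partitioning formula: mask the sign bit, then take
    // the remainder modulo the number of partitions (i.e. reduce tasks).
    static int getPartition(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

Because the result depends only on hashCode(), equal keys always land in the same partition, which is what guarantees that all values for a key reach the same reducer.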
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getPartition(KEY, VALUE, int) - Method in class org.apache.hadoop.mapreduce.Partitioner
-
Get the partition number for a given key (hence record) given the total number of partitions, i.e. the total number of reduce tasks for the job.
- getPartitionerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getPartitionerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getPartitionFile(JobConf) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
Get the path to the SequenceFile storing the sorted partition keyset.
- getPartitionFile(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Get the path to the SequenceFile storing the sorted partition keyset.
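Once the sorted partition keyset is loaded from that SequenceFile, mapping a key to a partition is a search over the boundary keys. A self-contained sketch of that lookup, assuming string keys and ignoring any optimizations the real `TotalOrderPartitioner` applies (the class and method names here are illustrative):

```java
import java.util.Arrays;

public class TotalOrderSketch {
    // splitPoints holds the (numPartitions - 1) sorted boundary keys
    // that would be read from the partition SequenceFile.
    static int partitionFor(String key, String[] splitPoints) {
        int pos = Arrays.binarySearch(splitPoints, key);
        // binarySearch returns -(insertionPoint) - 1 when the key is absent;
        // a key equal to a boundary goes to the partition above it.
        return pos < 0 ? -pos - 1 : pos + 1;
    }

    public static void main(String[] args) {
        String[] bounds = {"g", "p"}; // 3 partitions
        System.out.println(partitionFor("a", bounds)); // 0
        System.out.println(partitionFor("m", bounds)); // 1
        System.out.println(partitionFor("z", bounds)); // 2
    }
}
```

The result is a total order across partitions: concatenating the sorted reducer outputs yields a globally sorted dataset.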
- getPath() - Method in class org.apache.hadoop.mapred.FileSplit
-
The file containing this split's data.
- getPath(int) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns the ith Path
- getPath(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the ith Path
- getPath() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The file containing this split's data.
- getPathForCleanup() - Method in class org.apache.hadoop.mapred.CleanupQueue.PathDeletionContext
-
- getPathForCustomFile(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to generate a Path
for a file that is unique for
the task within the job output directory.
- getPathForWorkFile(TaskInputOutputContext<?, ?, ?, ?>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Helper function to generate a Path
for a file that is unique for
the task within the job output directory.
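The "unique for the task" property of these helpers conventionally comes from embedding the task type and partition number into the filename, in the style of `part-m-00000` / `part-r-00000`. A hedged sketch of that composition (the helper name and exact format here are illustrative, not the Hadoop implementation):

```java
public class UniqueNameSketch {
    // Compose "<base>-<typeChar>-<5-digit partition>", mirroring the
    // part-m-00000 / part-r-00000 naming convention.
    static String uniqueName(String base, char taskType, int partition) {
        return String.format("%s-%c-%05d", base, taskType, partition);
    }

    public static void main(String[] args) {
        System.out.println(uniqueName("part", 'r', 3)); // part-r-00003
    }
}
```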
- getPaths() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns all the Paths in the split
- getPaths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns all the Paths in the split
- getPhase() - Method in class org.apache.hadoop.mapred.Task
-
Return current phase of the task.
- getPhase() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get current phase of this task.
- getPhysicalMemorySize() - Method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
-
Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() - Method in class org.apache.hadoop.util.MemoryCalculatorPlugin
-
Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin.ProcResourceValues
-
Obtain the physical memory size used by current process tree.
- getPolicyProvider() - Method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- getPos() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Unsupported (returns zero in all cases).
- getPos() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request position from proxied RR.
- getPos() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
Return the amount of data processed.
- getPos() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- getPos() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Returns the current position in the input.
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- getPos() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
Returns the current position in the input.
- getPriority() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getPrivateDistributedCacheDir(String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getProblems() - Method in exception org.apache.hadoop.mapred.InvalidInputException
-
Get the complete list of the problems reported.
- getProblems() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Get the complete list of the problems reported.
- getProcessTree() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get the process-tree with latest state.
- getProcessTreeDump() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Get a dump of the process-tree.
- getProcResourceValues() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
- getProcResourceValues() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain resource status used by current process tree.
- getProfile() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getProfileEnabled() - Method in class org.apache.hadoop.mapred.JobConf
-
Get whether the task profiling is enabled.
- getProfileEnabled() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether the task profiling is enabled.
- getProfileEnabled() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getProfileEnabled() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getProfileEnabled() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get whether the task profiling is enabled.
- getProfileParams() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the profiler configuration arguments.
- getProfileParams() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getProfileParams() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getProfileParams() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getProfileParams() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the profiler configuration arguments.
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Get the range of maps or reduces to profile.
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the range of maps or reduces to profile.
- getProgress() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- getProgress() - Method in class org.apache.hadoop.examples.MultiFileWordCount.CombineFileLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Report progress as the minimum of all child RR progress.
- getProgress() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request progress from proxied RR.
- getProgress() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
Return progress based on the amount of data processed so far.
- getProgress() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
Get the progress within the split
- getProgress() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- getProgress() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Gets the Progress object; this has a float (0.0 - 1.0)
indicating the bytes processed by the iterator so far
- getProgress() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapred.Task
-
- getProgress() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the overall progress (from 0 to 1.0) for this TIP
- getProgress() - Method in class org.apache.hadoop.mapred.TaskReport
-
The amount completed, between zero and one.
- getProgress() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
The current progress of the record reader through its data.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
Return progress based on the amount of data processed so far.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
Get the progress within the split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
The current progress of the record reader through its data.
- getProgress() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- getProgressible() - Method in interface org.apache.hadoop.mapred.JobContext
-
Get the progress mechanism for reporting progress.
- getProgressible() - Method in class org.apache.hadoop.mapred.JobContextImpl
-
Deprecated.
Get the progress mechanism for reporting progress.
- getProgressible() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
Deprecated.
- getProgressible() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Deprecated.
- getProperty(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Fetches the value of a property from the configuration file.
- getProtocolAddress(Configuration) - Method in class org.apache.hadoop.mapred.tools.GetGroups
-
- getProtocolAddress(Configuration) - Method in class org.apache.hadoop.mr1tools.GetGroupsBase
-
- getProtocolSignature(String, long, int) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getProtocolSignature(String, long, int) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getProtocolSignature(String, long, int) - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getProxy() - Method in class org.apache.hadoop.mapred.ConfiguredFailoverProxyProvider
-
Lazily initialize the RPC proxy object.
- getProxy() - Method in class org.apache.hadoop.mapred.JobTrackerProxies.ProxyAndInfo
-
- getPublicDistributedCacheDir() - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapred.JobClient
-
Gets the queue ACLs for the current user.
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getQueueAdmins(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueAdmins(String) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getQueueInfo(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Gets the queue information associated with a particular job queue.
- getQueueInfo(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueInfo(String) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getQueueInfoJson() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueInfoJson() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getQueueManager() - Method in class org.apache.hadoop.mapred.JobTracker
-
Return the QueueManager
associated with the JobTracker.
- getQueueName() - Method in class org.apache.hadoop.mapred.JobConf
-
Return the name of the queue to which this job is submitted.
- getQueueName() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the name of the queue to which the job is submitted.
- getQueueName() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Get the queue name from JobQueueInfo
- getQueues() - Method in class org.apache.hadoop.mapred.JobClient
-
Return an array of queue information objects about all the Job Queues
configured.
- getQueues() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueues() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getQueueState() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Return the queue state
- getReader() - Method in class org.apache.hadoop.contrib.failmon.LogParser
-
Return the BufferedReader that reads the log file.
- getReaders(FileSystem, Path, Configuration) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
Open the output generated by this format.
- getReaders(Configuration, Path) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Open the output generated by this format.
- getReaders(Path, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
Open the output generated by this format.
- getReadyJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Deprecated.
- getReadyJobsList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.contrib.index.example.LineDocInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.examples.SleepJob.SleepInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.examples.terasort.TeraInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.InputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.join.ComposableInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
This is not implemented yet.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
-
Deprecated.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
-
Create a record reader for the given split
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.streaming.AutoInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.streaming.StreamInputFormat
-
- getRecordReaderQueue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return sorted list of RecordReaders for this composite.
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Create a composite record writer that can write key/value data to different
output files
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
- getRecoveryDuration() - Method in class org.apache.hadoop.mapred.JobTracker
-
How long the jobtracker took to recover from restart.
- getReduceCounters() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Returns reduce phase counters by summing over all reduce tasks in progress.
- getReduceDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the reduce task's debug script.
- getReducerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getReducerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getReducerClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getReducerClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getReducerClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getReducerContext(ReduceContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer
-
Get a wrapped Reducer.Context
for custom implementations.
- getReducerMaxSkipGroups(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of acceptable skip groups surrounding the bad group PER
bad group in reducer.
- getReduceSlotCapacity() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of reduce slots in the cluster.
- getReduceSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job for reduce tasks?
Defaults to true.
- getReduceTask() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the reduce tasks of a job.
- getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of currently running reduce tasks in the cluster.
- getReport() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
Log the counters.
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
- getReport() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
-
- getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getReportDetails() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getReporter() - Method in class org.apache.hadoop.mapred.MapOutputCollector.Context
-
- getReporter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReportItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getReservedMapSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get number of reserved map slots in the cluster.
- getReservedReduceSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of reserved reduce slots in the cluster.
- getResourceCalculatorPlugin(Class<? extends ResourceCalculatorPlugin>, Configuration) - Static method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Get the ResourceCalculatorPlugin from the class name and configure it.
- getResourceStatus() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getResult() - Method in class org.apache.hadoop.examples.Sort
-
Get the last job that was run using this instance.
- getRetainHours() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobCompletedEvent
-
Get the number of hours for which job logs should be retained.
- getRotations() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getRpcAddressToBindTo() - Method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- getRpcPort() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getRpcPort() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getRpcTimeout(Configuration) - Static method in class org.apache.hadoop.mapred.JobClient
-
Returns the rpc timeout to use according to the configuration.
- getRunAsUser(JobConf) - Method in class org.apache.hadoop.mapred.TaskController
-
Returns the local unix user that a given job will run as.
- getRunningJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getRunningJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Deprecated.
- getRunningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
-
Version that is called from a timer thread, and therefore needs to be
careful to synchronize.
- getRunningMaps() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of running map tasks in the cluster.
- getRunningReduces() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of running reduce tasks in the cluster.
- getRunningTaskAttempts() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get the running task attempt IDs for this task
- getRunState() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getRunState() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
-
For each split sampled, emit when the ratio of the number of records
retained to the total record count is less than the specified
frequency.
- getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
-
Randomize the split order, then take the specified number of keys from
each split sampled, where each key is selected with the specified
probability and possibly replaced by a subsequently selected key when
the quota of keys from that split is satisfied.
- getSample(InputFormat<K, V>, JobConf) - Method in interface org.apache.hadoop.mapred.lib.InputSampler.Sampler
-
For a given job, collect and return a subset of the keys from the
input data.
- getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
-
From each split sampled, take the first numSamples / numSplits records.
- getSample(InputFormat<K, V>, Job) - Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.IntervalSampler
-
For each split sampled, emit when the ratio of the number of records
retained to the total record count is less than the specified
frequency.
- getSample(InputFormat<K, V>, Job) - Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.RandomSampler
-
Randomize the split order, then take the specified number of keys from
each split sampled, where each key is selected with the specified
probability and possibly replaced by a subsequently selected key when
the quota of keys from that split is satisfied.
- getSample(InputFormat<K, V>, Job) - Method in interface org.apache.hadoop.mapreduce.lib.partition.InputSampler.Sampler
-
For a given job, collect and return a subset of the keys from the
input data.
- getSample(InputFormat<K, V>, Job) - Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.SplitSampler
-
From each split sampled, take the first numSamples / numSplits records.
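The SplitSampler contract above (take the first numSamples / numSplits records from each split) can be sketched in plain Java. The `sample` helper below operates on in-memory lists and is purely illustrative, not the actual InputSampler API:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSamplerSketch {
    // Take the first numSamples / numSplits keys from each split,
    // mirroring the SplitSampler description (integer division).
    static <K> List<K> sample(List<List<K>> splits, int numSamples) {
        int perSplit = numSamples / splits.size();
        List<K> out = new ArrayList<>();
        for (List<K> split : splits) {
            for (int i = 0; i < Math.min(perSplit, split.size()); i++) {
                out.add(split.get(i));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> splits = List.of(
            List.of("a", "b", "c"), List.of("d", "e", "f"));
        System.out.println(sample(splits, 4)); // [a, b, d, e]
    }
}
```

Because only the head of each split is read, this sampler is cheap but biased toward the start of the input; RandomSampler and IntervalSampler trade more I/O for better coverage.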
- getSchedulingInfo() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getSchedulingInfo() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Gets the scheduling information associated with a particular job queue.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapred.JobStatus
-
Gets the scheduling information associated with a particular job.
- getScopeInsideParentNode() - Method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- getSecond() - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
-
- getSecretKey(Credentials, Text) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Auxiliary method to get the user's secret keys.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBRecordReader
-
Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Returns the query for selecting the records from an Oracle DB.
- getSequenceFileOutputKeyClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Get the key class for the SequenceFile
- getSequenceFileOutputKeyClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Get the key class for the SequenceFile
- getSequenceFileOutputValueClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Get the value class for the SequenceFile
- getSequenceFileOutputValueClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Get the value class for the SequenceFile
- getSequenceWriter(TaskAttemptContext, Class<?>, Class<?>) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getServerAddress(Configuration, String, String, String) - Static method in class org.apache.hadoop.mapred.NetUtils2
-
Deprecated.
- getServiceAddressFromToken(Token<?>) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- getServices() - Method in class org.apache.hadoop.mapred.MapReducePolicyProvider
-
- getServiceStatus() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- getServiceStatus() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceProtocol
-
- getSessionId() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-specified session identifier.
- getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the setup tasks of a job.
- getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getShape(boolean, int) - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getShuffleFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get shuffle finish time for the task.
- getSize() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- getSize() - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- getSkipOutputPath(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the directory to which skipped records are written.
- getSkipRanges() - Method in class org.apache.hadoop.mapred.Task
-
Get skipRanges.
- getSortComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the RawComparator
comparator used to compare keys.
- getSortComparator() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getSortComparator() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getSortComparator() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the RawComparator
comparator used to compare keys.
- getSortFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get the sort finish time for the task.
- getSpace(int) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job?
Defaults to true.
- getSpillFile(int) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return a local map spill file created earlier.
- getSpillFileForWrite(int, long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map spill file name.
- getSpillIndexFile(int) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return a local map spill index file created earlier.
- getSpillIndexFileForWrite(int, long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map spill index file name.
- getSplit() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getSplitHosts(BlockLocation[], long, long, NetworkTopology) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
This function identifies and returns the hosts that contribute
most for a given split.
- getSplitIndex() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getSplitLocation() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitIndex
-
- getSplitLocation() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getSplitLocations() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the split locations
- getSplitNodes() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Gets the Node list of input split locations sorted in rack order.
- getSplits(int) - Method in class org.apache.hadoop.examples.dancing.Pentomino
-
Generate a list of prefixes to a given depth
- getSplits(JobConf, int) - Method in class org.apache.hadoop.examples.SleepJob.SleepInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.examples.terasort.TeraInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getSplits(JobConf, int) - Method in interface org.apache.hadoop.mapred.InputFormat
-
Logically split the set of input files for the job.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
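The pairing described above can be sketched outside Hadoop: the i-th split of each child source is zipped into one composite split, which is why the join requires all sources to be identically partitioned. The class and method names below are illustrative, not Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CompositeSplitSketch {
    /**
     * Pair up the i-th split of each child source into one composite split.
     * All children must report the same number of splits, since element i
     * of every child is joined against element i of every other child.
     */
    public static List<List<String>> compose(List<List<String>> childSplits) {
        int n = childSplits.get(0).size();
        for (List<String> child : childSplits) {
            if (child.size() != n) {
                throw new IllegalArgumentException("children must have equal split counts");
            }
        }
        List<List<String>> composite = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            List<String> slice = new ArrayList<>();
            for (List<String> child : childSplits) {
                slice.add(child.get(i)); // i-th split from each child
            }
            composite.add(slice);
        }
        return composite;
    }

    public static void main(String[] args) {
        List<List<String>> result = compose(Arrays.asList(
            Arrays.asList("a0", "a1"),
            Arrays.asList("b0", "b1")));
        System.out.println(result); // [[a0, b0], [a1, b1]]
    }
}
```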
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
Logically splits the set of input files for the job, splits N lines
of the input as one split.
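The N-line splitting above can be sketched without Hadoop: given the byte offset at which each line starts, every N consecutive lines become one (offset, length) split. A minimal sketch, not the actual NLineInputFormat implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class NLineSplitSketch {
    /** A split is an (offset, length) pair into the input file. */
    public record Split(long offset, long length) {}

    /**
     * Group lines into splits of at most n lines each.
     * lineOffsets[i] is the byte offset where line i starts;
     * fileLength is the total size of the file.
     */
    public static List<Split> split(long[] lineOffsets, long fileLength, int n) {
        List<Split> splits = new ArrayList<>();
        for (int i = 0; i < lineOffsets.length; i += n) {
            long start = lineOffsets[i];
            // a split ends where the next group of lines begins, or at EOF
            long end = (i + n < lineOffsets.length) ? lineOffsets[i + n] : fileLength;
            splits.add(new Split(start, end - start));
        }
        return splits;
    }

    public static void main(String[] args) {
        // 5 lines starting at these offsets, 50 bytes total, 2 lines per split
        List<Split> s = split(new long[] {0, 10, 20, 30, 40}, 50, 2);
        System.out.println(s.size());          // 3 splits: lines {0,1}, {2,3}, {4}
        System.out.println(s.get(2).length()); // 10 (the final single-line split)
    }
}
```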
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
-
Deprecated.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Generate the list of files and make them into FileSplits.
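Generating FileSplits boils down to choosing a split size and carving each file into chunks of that size. A common formulation clamps the filesystem block size between configured minimum and maximum split sizes; the sketch below illustrates that arithmetic only (names are illustrative, and the real FileInputFormat also handles non-splittable compressed files and the final partial chunk):

```java
public class SplitSizeSketch {
    /**
     * Clamp the block size between the configured minimum and maximum
     * split sizes; each file is then carved into chunks of this size.
     */
    public static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    /** Number of chunks needed to cover the file (ceiling division). */
    public static int numSplits(long fileLength, long splitSize) {
        return (int) ((fileLength + splitSize - 1) / splitSize);
    }

    public static void main(String[] args) {
        // with no min/max constraints, the split size is the block size
        long splitSize = computeSplitSize(128L << 20, 1L, Long.MAX_VALUE);
        System.out.println(splitSize == 128L << 20);          // true
        System.out.println(numSplits(300L << 20, splitSize)); // a 300 MB file -> 3 splits
    }
}
```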
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Logically splits the set of input files for the job, splits N lines
of the input as one split.
- getSplitsForFile(FileStatus, Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- getSplitter(int) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getSplitter(int) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- getStagingAreaDir() - Method in class org.apache.hadoop.mapred.JobClient
-
Grab the jobtracker's view of the staging directory path where
job-specific files will be placed.
- getStagingAreaDir() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getStagingAreaDir() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getStagingAreaDir() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Grab the jobtracker's view of the staging directory path where job-specific
files will be placed.
- getStagingDir(JobClient, Configuration) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Initializes the staging directory and returns the path.
- getStart() - Method in class org.apache.hadoop.mapred.FileSplit
-
The position of the first byte in the file to process.
- getStart() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getStart() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The position of the first byte in the file to process.
- getStartOffset() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getStartOffset() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitIndex
-
- getStartOffset() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getStartOffsets() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Returns an array containing the start offsets of the files in the split
- getStartOffsets() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns an array containing the start offsets of the files in the split
- getStartTime() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getStartTime() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getStartTime() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getStartTime() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return the start time
- getStartTime() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get start time of task.
- getStartTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get start time of the task.
- getState(String) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
-
Read and return the state of parsing for a particular log file.
- getState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- getState() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Deprecated.
- getState() - Method in class org.apache.hadoop.mapred.TaskReport
-
The most recent state, reported by a Reporter.
- getStatement() - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
- getStatement() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getStateString() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getStatus() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
Get the last set status message.
- getStatus() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the last set status message.
- getStr() - Method in class org.apache.hadoop.mapred.join.Parser.StrToken
-
- getStr() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getSuccessEventNumber() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the event number that was raised for this tip
- getSuccessfulJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getSuccessfulJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Deprecated.
- getSuccessfulTaskAttempt() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get the attempt ID that took this task to completion
- getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
-
- getSummaryJson() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getSummaryJson() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
This method checks to see if symlinks are to be created for the
localized cache files in the current working directory.
Used by internal DistributedCache code.
- getSymlink() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
This method checks to see if symlinks are to be created for the
localized cache files in the current working directory.
- getSymlink() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getSymlink() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getSymlink() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
This method checks to see if symlinks are to be created for the
localized cache files in the current working directory.
- getSystemDir() - Method in class org.apache.hadoop.mapred.JobClient
-
Grab the jobtracker system directory path where job-specific files are to be placed.
- getSystemDir() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getSystemDir() - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getTag() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- getTask() - Method in class org.apache.hadoop.mapred.JvmTask
-
- getTask(JvmContext) - Method in class org.apache.hadoop.mapred.TaskTracker
-
Called upon startup by the child process, to fetch Task data.
- getTask(JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Called when a child task process starts, to get its task.
- getTaskAttemptId() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.MapOutputLocation
-
- getTaskAttemptID() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
Deprecated.
- getTaskAttemptID() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Deprecated.
Get the taskAttemptID.
- getTaskAttemptId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns task id.
- getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
Get the unique name for this task attempt.
- getTaskAttemptID() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the unique name for this task attempt.
- getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Deprecated.
- getTaskAttemptLogDir(TaskAttemptID, String, String[]) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Get attempt log directory path for the given attempt-id under a randomly
selected mapred local directory.
- getTaskAttempts() - Method in class org.apache.hadoop.mapred.JobHistory.Task
-
Returns all task attempts for this task.
- getTaskCompletionEvents(int, int) - Method in class org.apache.hadoop.mapred.JobInProgress
-
- getTaskCompletionEvents(JobID, int, int) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getTaskCompletionEvents(JobID, int, int) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- getTaskCompletionEvents(int) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get events indicating completion (success/failure) of component tasks.
- getTaskCompletionEvents(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Get events indicating completion (success/failure) of component tasks.
- getTaskController() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTaskController() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogManager
-
Get the taskController for deleting logs.
- getTaskDiagnostics(TaskAttemptID) - Method in class org.apache.hadoop.mapred.JobTracker
-
Get the diagnostics for a given task
- getTaskDiagnostics(TaskAttemptID) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
Returns the diagnostic information for a particular task in the given job.
- getTaskDiagnostics(TaskAttemptID) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Gets the diagnostic messages for a given task attempt.
- getTaskDistributedCacheManager(JobID) - Method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
- getTaskId() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.MapOutputLocation
-
- getTaskID() - Method in class org.apache.hadoop.mapred.Task
-
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
Getter/Setter methods for log4j.
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskReport
-
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskReport
-
The id of the task.
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getTaskID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the
TaskID
object that this task attempt belongs to
- getTaskIDsPattern(String, Integer, Boolean, Integer) - Static method in class org.apache.hadoop.mapred.TaskID
-
Deprecated.
- getTaskInfo(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getTaskInProgress(TaskID) - Method in class org.apache.hadoop.mapred.JobInProgress
-
Return the TaskInProgress that matches the tipid.
- getTaskLogFile(TaskAttemptID, boolean, TaskLog.LogName) - Static method in class org.apache.hadoop.mapred.TaskLog
-
- getTaskLogLength(JobConf) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Get the desired maximum length of task's logs.
- getTaskLogsUrl(JobHistory.TaskAttempt) - Static method in class org.apache.hadoop.mapred.JobHistory
-
Return the TaskLogsUrl of a particular TaskAttempt
- getTaskLogUrl(String, String, String) - Static method in class org.apache.hadoop.mapred.TaskLogServlet
-
Construct the taskLogUrl
- getTaskLogUrl(TaskTrackerStatus, String) - Static method in class org.apache.hadoop.mapred.TaskLogServlet
-
Construct the taskLogUrl
- getTaskMemoryManager() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTaskOutputFilter(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
-
Get the task output filter out of the JobConf.
- getTaskOutputFilter() - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
- getTaskOutputPath(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to create the task's temporary output directory and
return the path to the task's output file.
- getTaskReports() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
Get the current tasks at the TaskTracker.
- getTaskRunTime() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns time (in millisec) the task took to complete.
- getTasksInfoJson() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTasksInfoJson() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getTaskStatus() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns enum Status.SUCCESS or Status.FAILURE.
- getTaskStatus(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the status of the specified task
- getTaskStatuses() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Get the Status of the tasks managed by this TIP
- getTaskToRun(String) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return a Task that can be sent to a TaskTracker for execution.
- getTaskTracker(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getTaskTracker() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of active trackers in the cluster.
- getTaskTrackerHttp() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
HTTP location of the tasktracker where this task ran.
- getTaskTrackerInstrumentation() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTaskTrackerReportAddress() - Method in class org.apache.hadoop.mapred.TaskTracker
-
Return the port to which the tasktracker is bound.
- getTaskTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of task trackers in the cluster.
- getTaskTrackerStatus(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- getTaskType() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the TaskType of the TaskAttemptID
- getTaskType() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Get the type of the task
- getTerm() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
Get the term.
- getText() - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
-
Get the text that represents a document.
- getText() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
-
The text of the document id.
- getThreadCount() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getThreadCount() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getThreadState() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getTimestamp(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Returns the mtime of a given cache file on HDFS.
- getTip(TaskID) - Method in class org.apache.hadoop.mapred.JobTracker
-
Returns specified TaskInProgress, or null.
- getTIPId() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return an ID for this task, not its component taskid-threads
- getTotalJobSubmissions() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of job submissions in the cluster.
- getTotalLogFileSize() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- getTotalPhysicalMemory() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Get the maximum amount of physical memory on the tasktracker.
- getTotalSubmissions() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getTotalVirtualMemory() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Get the maximum amount of virtual memory on the tasktracker.
- getTrackerIdentifier() - Method in class org.apache.hadoop.mapred.JobTracker
-
Get the unique identifier (ie.
- getTrackerName() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getTrackerName() - Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
-
- getTrackerPort() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getTrackingURL() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the URL where some job progress information will be displayed.
- getTrackingURL() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the URL where some job progress information will be displayed.
- getTTExpiryInterval() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the tasktracker expiry interval for the cluster
- getType() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getType() - Method in class org.apache.hadoop.typedbytes.TypedBytesWritable
-
Get the type code embedded in the first byte.
- getUgmProtocol() - Method in class org.apache.hadoop.mr1tools.GetGroupsBase
-
- getUmbilical() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- getUnderlyingCounter() - Method in interface org.apache.hadoop.mapreduce.Counter
-
Return the underlying object if this is a facade.
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getUnderlyingGroup() - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- getUnderlyingGroup() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Exposes the underlying group type if a facade.
- getUniqueFile(TaskAttemptContext, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Generate a unique filename, based on the task id, name, and extension
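The unique-filename convention combines a base name, a task-type character, and the partition number padded to five digits, yielding names like part-r-00000 that sort in partition order. A self-contained sketch of that formatting (the helper name and signature are illustrative, not the Hadoop API):

```java
import java.text.NumberFormat;

public class UniqueFileSketch {
    /**
     * Build an output file name like "part-r-00000.gz" from a base name,
     * a task-type character ('m' or 'r'), a partition number, and an
     * optional extension. Zero-padding to five digits keeps names sortable.
     */
    public static String uniqueFile(String base, char taskType, int partition, String extension) {
        NumberFormat fmt = NumberFormat.getInstance();
        fmt.setMinimumIntegerDigits(5); // pad: 0 -> "00000"
        fmt.setGroupingUsed(false);     // no thousands separators
        return base + "-" + taskType + "-" + fmt.format(partition) + extension;
    }

    public static void main(String[] args) {
        System.out.println(uniqueFile("part", 'r', 0, ""));     // part-r-00000
        System.out.println(uniqueFile("part", 'm', 42, ".gz")); // part-m-00042.gz
    }
}
```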
- getUniqueItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
- getUniqueName(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to generate a name that is unique for the task.
- getUpperClause() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getURIs(String, String) - Method in class org.apache.hadoop.streaming.StreamJob
-
Get the URIs of all the files/caches.
- getURL() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the link to the web-ui for details of the job.
- getUrlScheme() - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getUsageString() - Method in class org.apache.hadoop.mapred.tools.MRHAAdmin
-
- getUsedMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getUseNewMapper() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the framework use the new context-object code for running
the mapper?
- getUseNewReducer() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the framework use the new context-object code for running
the reducer?
- getUser() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the reported username for this job.
- getUser() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Get the user for the job
- getUser() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the user id.
- getUser() - Method in class org.apache.hadoop.mapred.Task
-
Get the name of the user running the job/task.
- getUser() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the reported username for this job.
- getUser() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getUser() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getUser() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
-
- getUser() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the reported username for this job.
- getUserDir(String) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getUserLogCleaner() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogManager
-
- getUserLogDir() - Static method in class org.apache.hadoop.mapred.TaskLog
-
- GetUserMappingsProtocol - Interface in org.apache.hadoop.mr1tools
-
Protocol implemented by the Name Node and Job Tracker which maps users to
groups.
- getUserName(JobConf) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
-
Get the user name from the job conf
- getUsername() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
-
- getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
-
- getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- getValue() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- getValue() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- getValue() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Gets the current raw value.
- getValue() - Method in interface org.apache.hadoop.mapreduce.Counter
-
What is the current value of this counter?
- getValue() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getValue() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getValue() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getValue() - Method in enum org.apache.hadoop.mapreduce.JobStatus.State
-
- getValue() - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getValue(String, String, String, T) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get a resource given bundle name and key
- getValue() - Method in class org.apache.hadoop.typedbytes.TypedBytesWritable
-
Get the typed bytes as a Java object.
- getValue() - Method in enum org.apache.hadoop.util.ProcessTree.Signal
-
- getValueClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getValueClassName() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the value class for this SequenceFile.
- getValueClassName() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the value class for this SequenceFile.
- getValues() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getValues() - Method in interface org.apache.hadoop.mapreduce.ReduceContext
-
Iterate through the values for the current key, reusing the same value
object, which is stored in the context.
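The "reusing the same value object" behavior above has a practical consequence: references kept across iterations all alias one object, so values needed after the loop must be copied. A self-contained sketch of the pitfall, with a plain mutable holder standing in for a reused Writable:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ValueReuseSketch {
    /** A mutable holder, standing in for a reused value object. */
    static class Holder {
        int value;
        Holder copy() { Holder h = new Holder(); h.value = value; return h; }
    }

    /** Iterate over data, but hand back the SAME holder each time. */
    static Iterable<Holder> reusingIterable(int[] data) {
        Holder shared = new Holder();
        return () -> new Iterator<Holder>() {
            int i = 0;
            public boolean hasNext() { return i < data.length; }
            public Holder next() { shared.value = data[i++]; return shared; }
        };
    }

    public static void main(String[] args) {
        List<Holder> kept = new ArrayList<>();
        List<Holder> copied = new ArrayList<>();
        for (Holder h : reusingIterable(new int[] {1, 2, 3})) {
            kept.add(h);          // wrong: all entries alias one object
            copied.add(h.copy()); // right: snapshot the current value
        }
        System.out.println(kept.get(0).value);   // 3 -- overwritten by reuse
        System.out.println(copied.get(0).value); // 1 -- preserved by copying
    }
}
```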
- getValues() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl
-
Iterate through the values for the current key, reusing the same value
object, which is stored in the context.
- getVersion() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
Get the version number of the entire index.
- getVersion() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getVersion() - Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getVersion() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- getVersion() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getVirtualMemorySize() - Method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
-
Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() - Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() - Method in class org.apache.hadoop.util.MemoryCalculatorPlugin
-
Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
-
Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() - Method in class org.apache.hadoop.util.ResourceCalculatorPlugin.ProcResourceValues
-
Obtain the virtual memory size used by a current process tree.
- getVIVersion() - Method in class org.apache.hadoop.mapred.JobTracker
-
- getWaitingJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getWaitingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Deprecated.
- getWorkingDirectory() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the current working directory for the default file system.
- getWorkingDirectory() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the current working directory for the default file system.
- getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the current working directory for the default file system.
- getWorkOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the Path to the task's temporary output directory
for the map-reduce job (see "Tasks' Side-Effect Files").
- getWorkOutputPath(TaskInputOutputContext<?, ?, ?, ?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the Path to the task's temporary output directory
for the map-reduce job (see "Tasks' Side-Effect Files").
- getWorkPath() - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Get the directory that the task should write results into
- getWriteAllCounters() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Get the "writeAllCounters" option
- getZKFCAddress() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- getZkfcPort(Configuration) - Static method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- go() - Method in class org.apache.hadoop.streaming.StreamJob
-
- goodClassOrNull(Configuration, String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
It may seem strange to silently switch behaviour when a String
is not a classname; the reason is simplified Usage:
- Grep - Class in org.apache.hadoop.examples
-
- GROUP - Static variable in class org.apache.hadoop.mapreduce.lib.map.RegexMapper
-
- ID - Class in org.apache.hadoop.mapred
-
A general identifier, which internally stores the id
as an integer.
- ID(int) - Constructor for class org.apache.hadoop.mapred.ID
-
Constructs an ID object from the given int.
- ID() - Constructor for class org.apache.hadoop.mapred.ID
-
- id() - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
-
Return the position in the collector this class occupies.
- id() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return the position in the collector this class occupies.
- id - Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- id() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Return the position in the collector this class occupies.
- ID - Class in org.apache.hadoop.mapreduce
-
A general identifier, which internally stores the id
as an integer.
- ID(int) - Constructor for class org.apache.hadoop.mapreduce.ID
-
Constructs an ID object from the given int.
- ID() - Constructor for class org.apache.hadoop.mapreduce.ID
-
- id - Variable in class org.apache.hadoop.mapreduce.ID
-
- ident - Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- IdentifierResolver - Class in org.apache.hadoop.streaming.io
-
This class is used to resolve a string identifier into the required IO
classes.
- IdentifierResolver() - Constructor for class org.apache.hadoop.streaming.io.IdentifierResolver
-
- IdentityLocalAnalysis - Class in org.apache.hadoop.contrib.index.example
-
Identity local analysis maps inputs directly into outputs.
- IdentityLocalAnalysis() - Constructor for class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
-
- IdentityMapper<K,V> - Class in org.apache.hadoop.mapred.lib
-
Implements the identity function, mapping inputs directly to outputs.
- IdentityMapper() - Constructor for class org.apache.hadoop.mapred.lib.IdentityMapper
-
- IdentityReducer<K,V> - Class in org.apache.hadoop.mapred.lib
-
Performs no reduction, writing all input values directly to the output.
- IdentityReducer() - Constructor for class org.apache.hadoop.mapred.lib.IdentityReducer
-
- idFormat - Static variable in class org.apache.hadoop.mapreduce.JobID
-
- idFormat - Static variable in class org.apache.hadoop.mapreduce.TaskID
-
- IDistributionPolicy - Interface in org.apache.hadoop.contrib.index.mapred
-
A distribution policy decides, given a document with a document id, which
shard an insert request should be sent to, and which shard(s) a delete
request should be sent to.
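The routing contract above can be sketched in plain Java. This is an illustrative hedged sketch of the hashing-policy idea (compare HashingDistributionPolicy), not Hadoop's implementation; the class and method names are hypothetical.

```java
// Sketch of an IDistributionPolicy-style router: an insert goes to exactly
// one shard chosen deterministically from the document id, so a later delete
// for the same id can be routed to the same shard.
class HashingPolicy {
    private final int numShards;

    HashingPolicy(int numShards) {
        this.numShards = numShards;
    }

    // Shard index for an insert: stable hash of the document id,
    // mapped into [0, numShards) with floorMod to avoid negatives.
    int chooseShardForInsert(String docId) {
        return Math.floorMod(docId.hashCode(), numShards);
    }
}
```

A round-robin policy would instead keep a rotating counter, at the cost of having to broadcast deletes to every shard.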
- idWithinJob() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- idWithinJob() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Return the index of the tip within the job, so
"task_200707121733_1313_0002_m_012345" would return 12345.
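The mapping from a task name to its index can be shown with a small pure-Java helper. The class name is hypothetical; only the numeric-suffix convention comes from the entry above.

```java
// Hypothetical helper mirroring TaskInProgress.idWithinJob(): the tip index
// is the decimal suffix after the last underscore of the task name.
class TipIdParser {
    // "task_200707121733_1313_0002_m_012345" -> 12345
    static int idWithinJob(String taskName) {
        int lastUnderscore = taskName.lastIndexOf('_');
        // Integer.parseInt ignores the leading zeros in "012345".
        return Integer.parseInt(taskName.substring(lastUnderscore + 1));
    }
}
```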
- idx - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- idx - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- ifmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- IIndexUpdater - Interface in org.apache.hadoop.contrib.index.mapred
-
A class implementing the index updater interface should create a Map/Reduce job
configuration and run the Map/Reduce job to analyze documents and update
Lucene instances in parallel.
- ILocalAnalysis<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable> - Interface in org.apache.hadoop.contrib.index.mapred
-
Application specific local analysis.
- incompleteSubTask(TaskAttemptID, JobStatus) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Indicate that one of the taskids in this TaskInProgress
has failed.
- incrAllCounters(CounterGroupBase<Counters.Counter>) - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- incrAllCounters(Counters) - Method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
Increments multiple counters by their amounts in another Counters
instance.
- incrAllCounters(CounterGroupBase<T>) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- incrAllCounters(AbstractCounters<C, G>) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Increments multiple counters by their amounts in another Counters
instance.
- incrAllCounters(CounterGroupBase<T>) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Increment all counters by a group of counters
- incrAllCounters(CounterGroupBase<C>) - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- incrAllCounters(CounterGroupBase<C>) - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- incrCounter(Enum<?>, long) - Method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
Increments the specified counter by the specified amount, creating it if
it didn't already exist.
- incrCounter(String, String, long) - Method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
Increments the specified counter by the specified amount, creating it if
it didn't already exist.
- incrCounter(Enum<?>, long) - Method in interface org.apache.hadoop.mapred.Reporter
-
Increments the counter identified by the key, which can be of
any Enum
type, by the specified amount.
- incrCounter(String, String, long) - Method in interface org.apache.hadoop.mapred.Reporter
-
Increments the counter identified by the group and counter name
by the specified amount.
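The create-if-absent increment contract of Reporter.incrCounter can be sketched with a map; this is a minimal illustration under that contract, not Hadoop's Counters implementation, and the class name is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the incrCounter contract: increment the counter named by
// (group, counter) by the given amount, creating it if it did not exist.
class SimpleCounters {
    private final Map<String, Long> counters = new HashMap<>();

    void incrCounter(String group, String name, long amount) {
        // merge() creates the entry with 'amount' on first use,
        // then sums on every later call.
        counters.merge(group + "." + name, amount, Long::sum);
    }

    long getCounter(String group, String name) {
        return counters.getOrDefault(group + "." + name, 0L);
    }
}
```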
- incrCounter(Enum, long) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- incrCounter(String, String, long) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- incrCounters() - Method in class org.apache.hadoop.mapreduce.counters.Limits
-
- increment(long) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- increment(long) - Method in interface org.apache.hadoop.mapreduce.Counter
-
Increment this counter by the given value
- increment(long) - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- increment(long) - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- increment(long) - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- IndexRecord - Class in org.apache.hadoop.mapred
-
- IndexRecord() - Constructor for class org.apache.hadoop.mapred.IndexRecord
-
- IndexRecord(long, long, long) - Constructor for class org.apache.hadoop.mapred.IndexRecord
-
- IndexUpdateCombiner - Class in org.apache.hadoop.contrib.index.mapred
-
This combiner combines multiple intermediate forms into one intermediate
form.
- IndexUpdateCombiner() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
-
- IndexUpdateConfiguration - Class in org.apache.hadoop.contrib.index.mapred
-
This class provides the getters and the setters to a number of parameters.
- IndexUpdateConfiguration(Configuration) - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Constructor
- IndexUpdateMapper<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.contrib.index.mapred
-
This class applies local analysis to a key-value pair and then converts the
resulting docid-operation pair to a shard-and-intermediate-form pair.
- IndexUpdateMapper() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
-
- IndexUpdateOutputFormat - Class in org.apache.hadoop.contrib.index.mapred
-
The record writer of this output format simply puts a message in an output
path when a shard update is done.
- IndexUpdateOutputFormat() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
-
- IndexUpdatePartitioner - Class in org.apache.hadoop.contrib.index.mapred
-
This partitioner class puts the values of the same key - in this case the
same shard - in the same partition.
- IndexUpdatePartitioner() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
-
- IndexUpdater - Class in org.apache.hadoop.contrib.index.mapred
-
An implementation of the index updater interface which creates a Map/Reduce
job configuration and runs the Map/Reduce job to analyze documents and update
Lucene instances in parallel.
- IndexUpdater() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdater
-
- IndexUpdateReducer - Class in org.apache.hadoop.contrib.index.mapred
-
This reducer applies to a shard the changes intended for it.
- IndexUpdateReducer() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
- init(Shard[]) - Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
-
- init(Shard[]) - Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
-
- init(Shard[]) - Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
-
Initialization.
- init(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
-
- init(JobTracker, JobConf, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory
-
Initialize JobHistory files.
- init() - Method in class org.apache.hadoop.mapred.JobTrackerHAHttpRedirector.RedirectorServlet
-
- init(MapOutputCollector.Context) - Method in interface org.apache.hadoop.mapred.MapOutputCollector
-
- init(MapOutputCollector.Context) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
- init(ShuffleConsumerPlugin.Context) - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- init(ShuffleConsumerPlugin.Context) - Method in interface org.apache.hadoop.mapred.ShuffleConsumerPlugin
-
Initialize the reduce copier plugin.
- init() - Method in class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
-
- init(Configuration) - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- init() - Method in class org.apache.hadoop.streaming.StreamJob
-
- init() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- inited() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Check if the job has been initialized.
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.examples.MultiFileWordCount.CombineFileLineRecordReader
-
- initialize(TaskTracker) - Method in interface org.apache.hadoop.mapred.ShuffleProviderPlugin
-
Do constructor work here.
- initialize(JobConf, JobID, Reporter, boolean) - Method in class org.apache.hadoop.mapred.Task
-
- initialize(TaskTracker) - Method in class org.apache.hadoop.mapred.TaskTracker.DefaultShuffleProvider
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Called once at initialization.
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.InputWriter
-
Initializes the InputWriter.
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.OutputReader
-
Initializes the OutputReader.
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.RawBytesInputWriter
-
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.TextInputWriter
-
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.TypedBytesInputWriter
-
- initialize(PipeMapRed) - Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- initializeAttemptDirs(String, String, String) - Method in class org.apache.hadoop.mapreduce.server.tasktracker.Localizer
-
Create taskDirs on all the disks.
- initializeJob(String, String, Path, Path, TaskUmbilicalProtocol, InetSocketAddress) - Method in class org.apache.hadoop.mapred.DefaultTaskController
-
This routine initializes the local file system for running a job.
- initializeJob(String, String, Path, Path, TaskUmbilicalProtocol, InetSocketAddress) - Method in class org.apache.hadoop.mapred.TaskController
-
Create all of the directories necessary for the job to start and download
all of the job and private distributed cache files.
- initializeJobDirs(String, JobID) - Method in class org.apache.hadoop.mapreduce.server.tasktracker.Localizer
-
Prepare the job directories for a given job.
- initializeJobLogDir() - Method in class org.apache.hadoop.mapred.JobLocalizer
-
Create job log directory and set appropriate permissions for the directory.
- initializeJobLogDir(JobID) - Method in class org.apache.hadoop.mapreduce.server.tasktracker.Localizer
-
Create job log directory and set appropriate permissions for the directory.
- initializePieces() - Method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
-
Define the one sided pieces.
- initializePieces() - Method in class org.apache.hadoop.examples.dancing.Pentomino
-
Fill in the pieces list.
- initializeUserDirs(String) - Method in class org.apache.hadoop.mapreduce.server.tasktracker.Localizer
-
Initialize the local directories for a particular user on this TT.
- initJob(JobInProgress) - Method in class org.apache.hadoop.mapred.JobTracker
-
- initMerger() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- initNextRecordReader() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
Get the record reader for the next chunk in this CombineFileSplit.
- initNextRecordReader() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
Get the record reader for the next chunk in this CombineFileSplit.
- initRPC() - Method in class org.apache.hadoop.mapred.tools.MRZKFailoverController
-
- initTasks() - Method in class org.apache.hadoop.mapred.JobInProgress
-
Construct the splits, etc.
- InnerJoinRecordReader<K extends org.apache.hadoop.io.WritableComparable> - Class in org.apache.hadoop.mapred.join
-
Full inner join.
- INPUT_BOUNDING_QUERY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Input query to get the max and min values of the jdbc.input.query
- INPUT_CLASS_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Class name implementing DBWritable which will hold input tuples
- INPUT_CLASS_PROPERTY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Class name implementing DBWritable which will hold input tuples
- INPUT_CONDITIONS_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
WHERE clause in the input SELECT statement
- INPUT_CONDITIONS_PROPERTY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
WHERE clause in the input SELECT statement
- INPUT_COUNT_QUERY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Input query to get the count of records
- INPUT_COUNT_QUERY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Input query to get the count of records
- INPUT_FIELD_NAMES_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Field names in the Input table
- INPUT_FIELD_NAMES_PROPERTY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Field names in the Input table
- INPUT_FORMAT_CLASS_ATTR - Static variable in interface org.apache.hadoop.mapreduce.JobContext
-
- INPUT_ORDER_BY_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
ORDER BY clause in the input SELECT statement
- INPUT_ORDER_BY_PROPERTY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
ORDER BY clause in the input SELECT statement
- INPUT_QUERY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Whole input query, excluding LIMIT...OFFSET
- INPUT_QUERY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Whole input query, excluding LIMIT...OFFSET
- INPUT_TABLE_NAME_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Input table name
- INPUT_TABLE_NAME_PROPERTY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Input table name
- inputBytes(long) - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.ShuffleClientMetrics
-
- inputCounter - Variable in class org.apache.hadoop.mapred.Task.CombinerRunner
-
- inputFile - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- inputFile - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- InputFormat<K,V> - Interface in org.apache.hadoop.mapred
-
InputFormat
describes the input-specification for a
Map-Reduce job.
- InputFormat<K,V> - Class in org.apache.hadoop.mapreduce
-
InputFormat
describes the input-specification for a
Map-Reduce job.
- InputFormat() - Constructor for class org.apache.hadoop.mapreduce.InputFormat
-
- inputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- InputSampler<K,V> - Class in org.apache.hadoop.mapred.lib
-
- InputSampler(JobConf) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler
-
- InputSampler<K,V> - Class in org.apache.hadoop.mapreduce.lib.partition
-
- InputSampler(Configuration) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler
-
- InputSampler.IntervalSampler<K,V> - Class in org.apache.hadoop.mapred.lib
-
Sample from s splits at regular intervals.
- InputSampler.IntervalSampler(double) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
-
Create a new IntervalSampler sampling all splits.
- InputSampler.IntervalSampler(double, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
-
Create a new IntervalSampler.
- InputSampler.IntervalSampler<K,V> - Class in org.apache.hadoop.mapreduce.lib.partition
-
Sample from s splits at regular intervals.
- InputSampler.IntervalSampler(double) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler.IntervalSampler
-
Create a new IntervalSampler sampling all splits.
- InputSampler.IntervalSampler(double, int) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler.IntervalSampler
-
Create a new IntervalSampler.
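The regular-interval sampling idea behind InputSampler.IntervalSampler can be shown without Hadoop. This is a hedged sketch over an in-memory list under the stated frequency semantics; the class and method names are illustrative, not Hadoop's API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of interval sampling: keep roughly freq * records.size() items,
// taken at evenly spaced positions in the input.
class IntervalSample {
    static <T> List<T> sample(List<T> records, double freq) {
        List<T> out = new ArrayList<>();
        // Emit one record for every 'step' records seen.
        long step = (long) (1.0 / freq);
        long seen = 0;
        for (T r : records) {
            if (seen % step == 0) {
                out.add(r);
            }
            seen++;
        }
        return out;
    }
}
```

RandomSampler and SplitSampler differ only in how positions are chosen: random points versus the first n records of each split.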
- InputSampler.RandomSampler<K,V> - Class in org.apache.hadoop.mapred.lib
-
Sample from random points in the input.
- InputSampler.RandomSampler(double, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
-
Create a new RandomSampler sampling all splits.
- InputSampler.RandomSampler(double, int, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
-
Create a new RandomSampler.
- InputSampler.RandomSampler<K,V> - Class in org.apache.hadoop.mapreduce.lib.partition
-
Sample from random points in the input.
- InputSampler.RandomSampler(double, int) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler.RandomSampler
-
Create a new RandomSampler sampling all splits.
- InputSampler.RandomSampler(double, int, int) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler.RandomSampler
-
Create a new RandomSampler.
- InputSampler.Sampler<K,V> - Interface in org.apache.hadoop.mapred.lib
-
- InputSampler.Sampler<K,V> - Interface in org.apache.hadoop.mapreduce.lib.partition
-
- InputSampler.SplitSampler<K,V> - Class in org.apache.hadoop.mapred.lib
-
Samples the first n records from s splits.
- InputSampler.SplitSampler(int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
-
Create a SplitSampler sampling all splits.
- InputSampler.SplitSampler(int, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
-
Create a new SplitSampler.
- InputSampler.SplitSampler<K,V> - Class in org.apache.hadoop.mapreduce.lib.partition
-
Samples the first n records from s splits.
- InputSampler.SplitSampler(int) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler.SplitSampler
-
Create a SplitSampler sampling all splits.
- InputSampler.SplitSampler(int, int) - Constructor for class org.apache.hadoop.mapreduce.lib.partition.InputSampler.SplitSampler
-
Create a new SplitSampler.
- inputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- InputSplit - Interface in org.apache.hadoop.mapred
-
InputSplit
represents the data to be processed by an
individual
Mapper
.
- InputSplit - Class in org.apache.hadoop.mapreduce
-
InputSplit
represents the data to be processed by an
individual
Mapper
.
- InputSplit() - Constructor for class org.apache.hadoop.mapreduce.InputSplit
-
- inputTag - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- InputWriter<K,V> - Class in org.apache.hadoop.streaming.io
-
Abstract base for classes that write the client's input.
- InputWriter() - Constructor for class org.apache.hadoop.streaming.io.InputWriter
-
- inReaderSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- insert(EventRecord) - Method in class org.apache.hadoop.contrib.failmon.LocalStore
-
Insert an EventRecord to the local storage, after it
gets serialized and anonymized.
- insert(EventRecord[]) - Method in class org.apache.hadoop.contrib.failmon.LocalStore
-
Insert an array of EventRecords to the local storage, after they
get serialized and anonymized.
- INSERT - Static variable in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
-
- instances - Static variable in class org.apache.hadoop.contrib.failmon.Executor
-
- IntegerSplitter - Class in org.apache.hadoop.mapreduce.lib.db
-
Implement DBSplitter over integer values.
- IntegerSplitter() - Constructor for class org.apache.hadoop.mapreduce.lib.db.IntegerSplitter
-
- IntermediateForm - Class in org.apache.hadoop.contrib.index.mapred
-
An intermediate form for one or more parsed Lucene documents and/or
delete terms.
- IntermediateForm() - Constructor for class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
Constructor
- IntSumReducer<Key> - Class in org.apache.hadoop.mapreduce.lib.reduce
-
- IntSumReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer
-
- InvalidFileTypeException - Exception in org.apache.hadoop.mapred
-
Used when file type differs from the desired file type.
- InvalidFileTypeException() - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
-
- InvalidFileTypeException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
-
- InvalidInputException - Exception in org.apache.hadoop.mapred
-
This class wraps a list of problems with the input, so that the user
can get a list of problems together instead of finding and fixing them one
by one.
- InvalidInputException(List<IOException>) - Constructor for exception org.apache.hadoop.mapred.InvalidInputException
-
Create the exception with the given list.
- InvalidInputException - Exception in org.apache.hadoop.mapreduce.lib.input
-
This class wraps a list of problems with the input, so that the user
can get a list of problems together instead of finding and fixing them one
by one.
- InvalidInputException(List<IOException>) - Constructor for exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Create the exception with the given list.
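The wrap-a-list-of-problems pattern used by InvalidInputException can be sketched in plain Java. The class below is illustrative only; its name and accessor are hypothetical, not Hadoop's API.

```java
import java.io.IOException;
import java.util.List;

// Sketch of the InvalidInputException idea: collect every input problem
// into one exception so the user sees the full list at once instead of
// finding and fixing the problems one by one.
class AggregateInputException extends IOException {
    private final List<IOException> problems;

    AggregateInputException(List<IOException> problems) {
        super(problems.size() + " input problem(s)");
        this.problems = problems;
    }

    List<IOException> getProblems() {
        return problems;
    }
}
```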
- InvalidJobConfException - Exception in org.apache.hadoop.mapred
-
This exception is thrown when the jobconf is missing some mandatory attributes
or the value of some attributes is invalid.
- InvalidJobConfException() - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
-
- InvalidJobConfException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
-
- InvalidJobConfException(String, Throwable) - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
-
- InvalidJobConfException(Throwable) - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
-
- InverseMapper<K,V> - Class in org.apache.hadoop.mapred.lib
-
A
Mapper
that swaps keys and values.
- InverseMapper() - Constructor for class org.apache.hadoop.mapred.lib.InverseMapper
-
- InverseMapper<K,V> - Class in org.apache.hadoop.mapreduce.lib.map
-
A
Mapper
that swaps keys and values.
- InverseMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.map.InverseMapper
-
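The key/value swap performed by InverseMapper can be shown as a one-line pure-Java function. This is a conceptual sketch only; the class name is hypothetical and the real mapper emits through a Hadoop context rather than returning a value.

```java
import java.util.AbstractMap;
import java.util.Map;

// Sketch of what InverseMapper does per record: emit (value, key)
// for each input (key, value).
class Inverse {
    static <K, V> Map.Entry<V, K> invert(Map.Entry<K, V> e) {
        return new AbstractMap.SimpleEntry<>(e.getValue(), e.getKey());
    }
}
```

A typical use is turning (word, count) records into (count, word) so a subsequent sort orders by count.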
- ioSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- isAlive(String) - Static method in class org.apache.hadoop.util.ProcessTree
-
Is the process with PID pid still alive?
This method assumes that isAlive is called on a pid that was alive not
too long ago, and hence assumes no chance of pid-wrapping-around.
- isAlive() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Is the root-process alive?
- isAnyProcessInTreeAlive() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Is any of the subprocesses in the process-tree alive?
- isAutoFailoverEnabled() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- isAvailable() - Static method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
Checks if the ProcfsBasedProcessTree is available on this system.
- isBlacklisted(String) - Method in class org.apache.hadoop.mapred.JobTracker
-
Whether the tracker is blacklisted or not
- isCommitPending(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- isComplete() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Check if the job is finished or not.
- isComplete() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Is this tip complete?
- isComplete(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Is the given taskid the one that took this tip to completion?
- isComplete() - Method in class org.apache.hadoop.mapreduce.Job
-
Check if the job is finished or not.
- isCompleted() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- isCygwin() - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- isFailed() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Is the tip a failure?
- isFirstAttempt(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Is the task associated with taskid the first attempt of the tip?
- isFrameworkGroup(String) - Static method in class org.apache.hadoop.mapreduce.counters.CounterGroupFactory
-
Check whether a group name is a name of a framework group (including
the filesystem group).
- isHAEnabled() - Static method in class org.apache.hadoop.mapred.HAUtil
-
Returns true if jobtracker HA is configured.
- isHAEnabled(Configuration, String) - Static method in class org.apache.hadoop.mapred.HAUtil
-
Returns true if jobtracker HA is configured.
- isHealthy() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- isHealthy() - Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- isIdle() - Method in class org.apache.hadoop.mapred.TaskTracker
-
Is this task tracker idle?
- isJobCleanupTask() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- isJobComplete() - Method in class org.apache.hadoop.mapred.JobStatus
-
Returns true if the status is for a completed job.
- isJobDirValid(Path, FileSystem) - Static method in class org.apache.hadoop.mapred.JobClient
-
Checks if the job directory is clean and has all the required components
for (re) starting the job
- isJobSetupTask() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- isLocalHadoop() - Method in class org.apache.hadoop.streaming.StreamJob
-
- isLocalJobTracker(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- isManaged(Token<?>) - Method in class org.apache.hadoop.mapred.JobClient.Renewer
-
- isMap() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns whether this TaskAttemptID is a map ID
- isMap() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Returns whether this TaskID is a map ID
- isMapTask() - Method in class org.apache.hadoop.mapred.MapTask
-
- isMapTask() - Method in class org.apache.hadoop.mapred.ReduceTask
-
- isMapTask() - Method in class org.apache.hadoop.mapred.Task
-
- isMapTask() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- isMapTask() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Whether this is a map task
- isMR2() - Static method in class org.apache.hadoop.mapred.MRVersion
-
- isMultiNamedOutput(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns whether a named output is multiple.
- IsolationRunner - Class in org.apache.hadoop.mapred
-
IsolationRunner is intended to facilitate debugging by re-running a specific
task, given left-over task files for a (typically failed) past job.
- IsolationRunner() - Constructor for class org.apache.hadoop.mapred.IsolationRunner
-
- isOnlyCommitPending() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- isProcessGroupAlive(String) - Static method in class org.apache.hadoop.util.ProcessTree
-
Is the process group still alive?
This method assumes that isAlive is called on a pid that was alive not
too long ago, and hence assumes no chance of pid-wrapping-around.
- isQueueEmpty() - Method in class org.apache.hadoop.mapred.CleanupQueue
-
- isReady() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- isRunning() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Is this tip currently running any tasks?
- isSegmentsFile(String) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
-
Check if the file is a segments_N file
- isSegmentsGenFile(String) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
-
Check if the file is the segments.gen file
- isSetsidAvailable - Static variable in class org.apache.hadoop.util.ProcessTree
-
- isSkipping() - Method in class org.apache.hadoop.mapred.Task
-
Is Task in skipping mode.
- isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
Is the given filename splitable? Usually true, but if the file is
stream compressed, it will not be.
- isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- isSplitable(JobContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- isSplitable(JobContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Is the given filename splitable? Usually true, but if the file is
stream compressed, it will not be.
- isSplitable(JobContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
-
- isSplitable(JobContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
-
- isSuccessful() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Check if the job completed successfully.
- isSuccessful() - Method in class org.apache.hadoop.mapreduce.Job
-
Check if the job completed successfully.
- isTaskMemoryManagerEnabled() - Method in class org.apache.hadoop.mapred.TaskTracker
-
Is the TaskMemoryManager Enabled on this system?
- isTokenForLogicalAddress(Token<?>) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- isTruncaterJvm() - Static method in class org.apache.hadoop.mapred.TaskLogsTruncater
-
Return true if the current JVM is for truncation only.
- isValid() - Method in class org.apache.hadoop.contrib.failmon.EventRecord
-
Check if the EventRecord is a valid one, i.e., whether
it represents meaningful metric values.
- isValid() - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
-
Check if the SerializedRecord is a valid one, i.e., whether
it represents meaningful metric values.
- iterator() - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- iterator() - Method in class org.apache.hadoop.mapred.join.TupleWritable
-
Return an iterator over the elements in this tuple.
- iterator() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- iterator() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
- iterator() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- iterator() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- iterator() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl.ValueIterable
-
- safeGetCanonicalPath(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
-
- scheduleMap(TaskInProgress) - Method in class org.apache.hadoop.mapred.JobInProgress
-
Adds a map tip to the list of running maps.
- scheduleOffSwitch(int) - Method in class org.apache.hadoop.mapred.JobInProgress
-
Check if we can schedule an off-switch task for this job.
- scheduleReduce(TaskInProgress) - Method in class org.apache.hadoop.mapred.JobInProgress
-
Adds a reduce tip to the list of running reduces.
- scheduleReduces() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- schedulingOpportunity() - Method in class org.apache.hadoop.mapred.JobInProgress
-
- SCHEME - Static variable in class org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal
-
- SecondarySort - Class in org.apache.hadoop.examples
-
This is an example Hadoop Map/Reduce application.
- SecondarySort() - Constructor for class org.apache.hadoop.examples.SecondarySort
-
- SecondarySort.FirstGroupingComparator - Class in org.apache.hadoop.examples
-
Compare only the first part of the pair, so that reduce is called once
for each value of the first part.
- SecondarySort.FirstGroupingComparator() - Constructor for class org.apache.hadoop.examples.SecondarySort.FirstGroupingComparator
-
- SecondarySort.FirstPartitioner - Class in org.apache.hadoop.examples
-
Partition based on the first part of the pair.
- SecondarySort.FirstPartitioner() - Constructor for class org.apache.hadoop.examples.SecondarySort.FirstPartitioner
-
- SecondarySort.IntPair - Class in org.apache.hadoop.examples
-
Define a pair of integers that are writable.
- SecondarySort.IntPair() - Constructor for class org.apache.hadoop.examples.SecondarySort.IntPair
-
- SecondarySort.IntPair.Comparator - Class in org.apache.hadoop.examples
-
A Comparator that compares serialized IntPair.
- SecondarySort.IntPair.Comparator() - Constructor for class org.apache.hadoop.examples.SecondarySort.IntPair.Comparator
-
- SecondarySort.MapClass - Class in org.apache.hadoop.examples
-
Read two integers from each line and generate a key, value pair
as ((left, right), right).
- SecondarySort.MapClass() - Constructor for class org.apache.hadoop.examples.SecondarySort.MapClass
-
- SecondarySort.Reduce - Class in org.apache.hadoop.examples
-
A reducer class that just emits the sum of the input values.
- SecondarySort.Reduce() - Constructor for class org.apache.hadoop.examples.SecondarySort.Reduce
-
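The SecondarySort entries above describe a composite-key pattern: sort (left, right) pairs by both parts, but group reduce calls by the left part only. The following plain-Java sketch (no Hadoop dependencies; class and method names are illustrative, not the Hadoop API) shows the idea:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Plain-Java sketch of the secondary-sort trick used by the SecondarySort
// example. Names here are illustrative, not the Hadoop API.
public class SecondarySortSketch {
    // Full sort order for the composite (left, right) key: compare left
    // first, then right (the role of IntPair.Comparator).
    static final Comparator<int[]> FULL_ORDER =
        Comparator.<int[]>comparingInt(p -> p[0]).thenComparingInt(p -> p[1]);

    // Grouping by the left part only (the role of FirstGroupingComparator):
    // one "reduce call" sees all rights of a left value, already sorted.
    static List<List<int[]>> sortAndGroup(List<int[]> pairs) {
        List<int[]> sorted = new ArrayList<>(pairs);
        sorted.sort(FULL_ORDER);
        List<List<int[]>> groups = new ArrayList<>();
        for (int[] p : sorted) {
            if (groups.isEmpty()
                    || groups.get(groups.size() - 1).get(0)[0] != p[0]) {
                groups.add(new ArrayList<>());
            }
            groups.get(groups.size() - 1).add(p);
        }
        return groups;
    }
}
```

In the real job, FirstPartitioner additionally routes all pairs with the same left value to the same reducer so the grouping comparator can take effect.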
- SecureShuffleUtils - Class in org.apache.hadoop.mapreduce.security
-
utilities for generating keys, hashes and verifying them for shuffle
- SecureShuffleUtils() - Constructor for class org.apache.hadoop.mapreduce.security.SecureShuffleUtils
-
- seek(long) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
Implementation should seek forward in_ to the first byte of the next record.
- seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- seenPrimary_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- selectToken(Text, Collection<Token<? extends TokenIdentifier>>) - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenSelector
-
- SensorsParser - Class in org.apache.hadoop.contrib.failmon
-
Objects of this class parse the output of the lm-sensors utility
to gather information about fan speed, temperatures for cpus
and motherboard etc.
- SensorsParser() - Constructor for class org.apache.hadoop.contrib.failmon.SensorsParser
-
- SEPARATOR - Static variable in class org.apache.hadoop.mapreduce.ID
-
- SequenceFileAsBinaryInputFormat - Class in org.apache.hadoop.mapred
-
InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
- SequenceFileAsBinaryInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- SequenceFileAsBinaryInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
- SequenceFileAsBinaryInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
-
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader - Class in org.apache.hadoop.mapred
-
Read records from a SequenceFile as binary (raw) bytes.
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader - Class in org.apache.hadoop.mapreduce.lib.input
-
Read records from a SequenceFile as binary (raw) bytes.
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- SequenceFileAsBinaryOutputFormat - Class in org.apache.hadoop.mapred
-
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format
- SequenceFileAsBinaryOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- SequenceFileAsBinaryOutputFormat - Class in org.apache.hadoop.mapreduce.lib.output
-
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format
- SequenceFileAsBinaryOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes - Class in org.apache.hadoop.mapred
-
Inner class used for appendRaw
- SequenceFileAsBinaryOutputFormat.WritableValueBytes() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes(BytesWritable) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes - Class in org.apache.hadoop.mapreduce.lib.output
-
Inner class used for appendRaw
- SequenceFileAsBinaryOutputFormat.WritableValueBytes() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes(BytesWritable) - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapred
-
This class is similar to SequenceFileInputFormat, except it generates SequenceFileAsTextRecordReader
which converts the input keys and values to their String forms by calling the toString() method.
- SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
This class is similar to SequenceFileInputFormat, except it generates
SequenceFileAsTextRecordReader which converts the input keys and values
to their String forms by calling the toString() method.
- SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat
-
- SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapred
-
This class converts the input keys and values to their String forms by calling the
toString() method.
- SequenceFileAsTextRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapreduce.lib.input
-
This class converts the input keys and values to their String forms by
calling the toString() method.
- SequenceFileAsTextRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- SequenceFileInputFilter<K,V> - Class in org.apache.hadoop.mapred
-
A class that allows a map/red job to work on a sample of sequence files.
- SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter
-
- SequenceFileInputFilter<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A class that allows a map/red job to work on a sample of sequence files.
- SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
- SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapred
-
filter interface
- SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapreduce.lib.input
-
filter interface
- SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapred
-
base class for Filters
- SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
-
- SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapreduce.lib.input
-
base class for Filters
- SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.FilterBase
-
- SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapred
-
This class returns a set of records by examining the MD5 digest of its
key against a filtering frequency f.
- SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
- SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapreduce.lib.input
-
This class returns a set of records by examining the MD5 digest of its
key against a filtering frequency f.
- SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.MD5Filter
-
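The MD5Filter entries above describe digest-based sampling. A minimal self-contained sketch of that idea follows; the exact digest-to-number mapping used by Hadoop may differ, so treat the divisibility rule below as an assumption that only illustrates the technique:

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative MD5-based sampling filter in the spirit of
// SequenceFileInputFilter.MD5Filter: keep a record when the MD5 digest of
// its key, read as a non-negative integer, is divisible by the filtering
// frequency f. Hadoop's exact mapping may differ; this shows the idea only.
public class Md5SampleFilter {
    static boolean accept(byte[] key, int f) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(key);
            // f == 1 keeps every record; larger f keeps roughly 1/f of them,
            // deterministically per key.
            BigInteger n = new BigInteger(1, digest);
            return n.mod(BigInteger.valueOf(f)).signum() == 0;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```

Because the decision depends only on the key, the same sample is selected on every run, unlike random sampling.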
- SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapred
-
This class returns a percentage of records.
The percentage is determined by a filtering frequency f using
the criterion record# % f == 0.
- SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
- SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapreduce.lib.input
-
This class returns a percentage of records.
The percentage is determined by a filtering frequency f using
the criterion record# % f == 0.
- SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.PercentFilter
-
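The PercentFilter criterion stated above (record# % f == 0) is simple enough to show directly; this tiny sketch is illustrative only, not the Hadoop class itself:

```java
// Minimal illustration of the PercentFilter criterion: a record is kept
// when its ordinal number satisfies record# % f == 0, so f == 5 keeps
// every fifth record, i.e. roughly 20 percent of the input.
public class PercentSample {
    static boolean accept(long recordNumber, int f) {
        return recordNumber % f == 0;
    }
}
```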
- SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapred
-
Filters records by matching the key against a regex
- SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
- SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapreduce.lib.input
-
Filters records by matching the key against a regex
- SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.RegexFilter
-
- SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred
-
- SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapred
-
- SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
-
- SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapred
-
- SequenceFileRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- SequenceFileRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- SerializedRecord - Class in org.apache.hadoop.contrib.failmon
-
Objects of this class hold the serialized representations
of EventRecords.
- SerializedRecord(EventRecord) - Constructor for class org.apache.hadoop.contrib.failmon.SerializedRecord
-
Create the SerializedRecord given an EventRecord.
- SESSION_TIMEZONE_KEY - Static variable in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Configuration key to set to a timezone string.
- set(String, Object) - Method in class org.apache.hadoop.contrib.failmon.EventRecord
-
Set the value of a property of the EventRecord.
- set(String, String) - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
-
Set the value of a property of the EventRecord.
- set(int, int) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
-
Set the left and right values.
- setAggregatorDescriptors(JobConf, Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- setArchiveSizes(JobID, long[]) - Method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
Set the sizes for any archives, files, or directories in the private
distributed cache.
- setArchiveTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
This is to check the timestamp of the archives to be localized.
- setAssignedJobID(JobID) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
setAssignedJobID should not be called.
JOBID is set by the framework.
- setAttemptsToStartSkipping(Configuration, int) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the number of Task attempts AFTER which skip mode
will be kicked off.
- setAutoIncrMapperProcCount(Configuration, boolean) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- setAutoIncrReducerProcCount(Configuration, boolean) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- setBoundingQuery(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Set the user-defined bounding query to use with a user-defined query.
- setCacheArchives(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Set the configuration with the given set of archives.
- setCacheFiles(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Set the configuration with the given set of files.
- setCancelDelegationTokenUponJobCompletion(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Sets the flag that will allow the JobTracker to cancel the HDFS delegation
tokens upon job completion.
- setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the combiner class for the job.
- setCombinerKeyGroupingComparator(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-defined RawComparator for
grouping keys in the input to the combiner.
- setCombinerKeyGroupingComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setCompressMapOutput(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Should the map outputs be compressed before transfer?
Uses the SequenceFile compression.
- setCompressOutput(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set whether the output of the job is compressed.
- setCompressOutput(Job, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set whether the output of the job is compressed.
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.DefaultTaskController
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.lib.InputSampler
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
configure the filter according to configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
configure the filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
configure the Filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.Task
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.TaskController
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.tools.MRAdmin
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.tools.MRHAAdmin
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.MD5Filter
-
configure the filter according to configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.PercentFilter
-
configure the filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.RegexFilter
-
configure the Filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Read in the partition file and build indexing data structures.
- setConf(Configuration) - Method in class org.apache.hadoop.streaming.DumpTypedBytes
-
- setConf(Configuration) - Method in class org.apache.hadoop.streaming.LoadTypedBytes
-
- setConf(Configuration) - Method in class org.apache.hadoop.streaming.StreamJob
-
- setConf(Configuration) - Method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
-
- setCounters(Counters) - Method in class org.apache.hadoop.mapred.TaskStatus
-
Set the task's counters.
- setCountersEnabled(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Enables or disables counters for the named outputs.
- setCountersEnabled(Job, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Enables or disables counters for the named outputs.
- setCpuFrequency(long) - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Set the CPU frequency of this TaskTracker.
If the input is not a valid number, it will be set to UNAVAILABLE
- setCpuUsage(float) - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Set the CPU usage on this TaskTracker
- setCumulativeCpuTime(long) - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Set the cumulative CPU time on this TaskTracker since it started.
It can be set to UNAVAILABLE if it is currently unavailable.
- setDelete(Term) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
Set the instance to be a delete operation.
- setDiagnosticInfo(String) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- setDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- setDisplayName(String) - Method in interface org.apache.hadoop.mapreduce.Counter
-
Deprecated.
(and no-op by default)
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounter
-
Deprecated.
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- setDisplayName(String) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Set the display name of the group
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
Deprecated.
- setDistributionPolicyClass(Class<? extends IDistributionPolicy>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the distribution policy class.
- setDocumentAnalyzerClass(Class<? extends Analyzer>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the analyzer class.
- setDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
Set the given counter to the given value
- setErrOut(PrintStream) - Method in class org.apache.hadoop.mapred.tools.MRHAAdmin
-
- setEventId(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
set event Id.
- setExecFinishTime(long) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Set the exec finish time
- setExecStartTime(long) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Set the exec start time
- setExecutable(JobConf, String) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set the URI for the application's executable.
- setFailureInfo(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the reason for failure of this job
- setFileTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
This is to check the timestamp of the files to be localized.
- setFilterClass(Configuration, Class) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter
-
set the filter class
- setFilterClass(Job, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
set the filter class
- setFinalSync(JobConf, boolean) - Static method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
-
Set the requirement for a final sync before the stream is closed.
- setFormat(JobConf) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Interpret a given string as a composite expression.
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
set the filtering frequency in configuration
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
set the frequency and stores it in conf
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.MD5Filter
-
set the filtering frequency in configuration
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.PercentFilter
-
set the frequency and stores it in conf
- setGenericConf(Configuration, String, String, String...) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- setGroupingComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setID(int) - Method in class org.apache.hadoop.mapred.join.Parser.Node
-
- setIncludeAllCounters(boolean) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setIndexInputFormatClass(Class<? extends InputFormat>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the index input format class.
- setIndexMaxFieldLength(int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the max field length for a Lucene instance.
- setIndexMaxNumSegments(int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the max number of segments for a Lucene instance.
- setIndexShards(String) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the string representation of a number of shards.
- setIndexShards(IndexUpdateConfiguration, Shard[]) - Static method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- setIndexUpdaterClass(Class<? extends IIndexUpdater>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the index updater class.
- setIndexUseCompoundFile(boolean) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set whether to use the compound file format for a Lucene instance.
- setInput(JobConf, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInput(JobConf, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInput(Job, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Note that the "orderBy" column is called the "splitBy" in this version.
- setInput(Job, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
setInput() takes a custom query and a separate "bounding query" to use
instead of the custom "count query" used by DBInputFormat.
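The bounding-query entries above describe deriving split ranges from the min and max of the split column. This hypothetical sketch shows only the range arithmetic on an already-known [min, max); the real DataDrivenDBInputFormat issues SQL and builds InputSplits, which is not reproduced here:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of carving a bounding-query result (min/max of the
// split column) into contiguous split ranges, in the spirit of
// DataDrivenDBInputFormat. Each long[]{lo, hi} is a half-open range [lo, hi).
public class RangeSplitter {
    static List<long[]> splits(long min, long max, int numSplits) {
        List<long[]> out = new ArrayList<>();
        long span = max - min;
        for (int i = 0; i < numSplits; i++) {
            // Integer arithmetic distributes any remainder across ranges
            // while keeping them contiguous and non-overlapping.
            long lo = min + span * i / numSplits;
            long hi = min + span * (i + 1) / numSplits;
            if (lo < hi) out.add(new long[]{lo, hi});
        }
        return out;
    }
}
```

Each range then becomes one map task's WHERE clause, e.g. `splitCol >= lo AND splitCol < hi`.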
- setInput(Job, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInput(Job, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInputBoundingQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputClass(Class<? extends DBWritable>) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputConditions(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputCountQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputDataLength(long) - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- setInputDataLocations(String[]) - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- setInputFieldNames(String...) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputFormat(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the InputFormat implementation for the map-reduce job.
- setInputFormatClass(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setInputOrderBy(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputPathFilter(JobConf, Class<? extends PathFilter>) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Set a PathFilter to be applied to the input paths for the map-reduce job.
- setInputPathFilter(Job, Class<? extends PathFilter>) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set a PathFilter to be applied to the input paths for the map-reduce job.
- setInputPaths(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Sets the given comma separated paths as the list of inputs
for the map-reduce job.
- setInputPaths(JobConf, Path...) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Set the array of Paths as the list of inputs
for the map-reduce job.
- setInputPaths(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Sets the given comma separated paths as the list of inputs
for the map-reduce job.
- setInputPaths(Job, Path...) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set the array of Paths as the list of inputs
for the map-reduce job.
- setInputQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputSplit(InputSplit) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- setInputTableName(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputWriterClass(Class<? extends InputWriter>) - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
- setInsert(Document) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
Set the instance to be an insert operation.
- setInstrumentationClass(Configuration, Class<? extends JobTrackerInstrumentation>) - Static method in class org.apache.hadoop.mapred.JobTracker
-
- setInstrumentationClass(Configuration, Class<? extends TaskTrackerInstrumentation>) - Static method in class org.apache.hadoop.mapred.TaskTracker
-
- setIOSortMB(int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the IO sort space in MB.
- setIsCleanup(boolean) - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
Set whether the task is a cleanup attempt or not.
- setIsJavaMapper(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the Mapper is written in Java.
- setIsJavaRecordReader(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the job is using a Java RecordReader.
- setIsJavaRecordWriter(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the job will use a Java RecordWriter.
- setIsJavaReducer(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the Reducer is written in Java.
- setJar(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user jar for the map-reduce job.
- setJarByClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the job's jar file by finding an example class location.
- setJarByClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the Jar by finding where a given class came from.
- setJob(Job) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the mapreduce job
- setJobACLs(Map<JobACL, AccessControlList>) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the job acls
- setJobCleanupTask() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- setJobConf(JobConf) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
Set the mapred job conf for this job.
- setJobConf() - Method in class org.apache.hadoop.streaming.StreamJob
-
- setJobEndNotificationURI(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the URI to be invoked in order to send a notification after the job
has completed (success/failure).
- setJobFile(String) - Method in class org.apache.hadoop.mapred.Task
-
- setJobID(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the job ID for this job.
- setJobID(JobID) - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Set the JobID.
- setJobName(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-specified job name.
- setJobName(String) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the user-specified job name.
- setJobName(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the job name for this job.
- setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobConf
-
- setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the priority of the job, defaulting to NORMAL.
- setJobPriority(JobID, String) - Method in class org.apache.hadoop.mapred.JobTracker
-
- setJobPriority(JobID, String) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- setJobPriority(String) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Set the priority of a running job.
- setJobSetupTask() - Method in class org.apache.hadoop.mapred.TaskInProgress
-
- setJobState(ControlledJob.State) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the state for this job.
- setJobToken(Token<? extends TokenIdentifier>, Credentials) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Store the job token.
- setJobTokenSecret(SecretKey) - Method in class org.apache.hadoop.mapred.Task
-
Set the job token secret
- setJtHaRpcAddress(Configuration, String) - Static method in class org.apache.hadoop.mapred.HAUtil
-
- setJtRpcAddress(Configuration) - Static method in class org.apache.hadoop.mapred.HAUtil
-
Set the JT address from the RPC address so that the wrapped JobTracker
starts on the correct address.
- setJvmContext(JvmContext) - Method in class org.apache.hadoop.mapred.Task
-
Set the task JvmContext
- setKeepCommandFile(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether to keep the command file for debugging
- setKeepFailedTaskFiles(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the framework should keep the intermediate files for
failed tasks.
- setKeepTaskFilesPattern(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set a regular expression for task names that should be kept.
- setKeyComparator(Class<? extends WritableComparator>) - Method in class org.apache.hadoop.mapred.join.Parser.Node
-
- setKeyFieldComparatorOptions(String) - Method in class org.apache.hadoop.mapred.JobConf
-
- setKeyFieldComparatorOptions(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- setKeyFieldPartitionerOptions(String) - Method in class org.apache.hadoop.mapred.JobConf
-
- setKeyFieldPartitionerOptions(Job, String) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- setKeyValue(Text, Text, byte[], int, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- setLastSeen(long) - Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- setLeftOffset(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Set the subarray to be used for partitioning to
bytes[offset:]
in Python syntax.
- setLocalAnalysisClass(Class<? extends ILocalAnalysis>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
-
Set the local analysis class.
- setLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Set the conf to contain the location for localized archives.
- setLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Set the conf to contain the location for localized files.
- setLocalMaxRunningMaps(JobContext, int) - Static method in class org.apache.hadoop.mapred.LocalJobRunner
-
Set the max number of map tasks to run concurrently in the LocalJobRunner.
- setLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
Set the given counter to the given value
- setMapDebugScript(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the debug script to run when the map tasks fail.
- setMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the given class as the CompressionCodec
for the map outputs.
- setMapOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the key class for the map output data.
- setMapOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the key class for the map output data.
- setMapOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the value class for the map output data.
- setMapOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the value class for the map output data.
- setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the
Mapper
class for the job.
- setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setMapperClass(Job, Class<? extends Mapper<K1, V1, K2, V2>>) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Set the application's mapper class.
- setMapperMaxSkipRecords(Configuration, long) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the number of acceptable skip records surrounding the bad record PER
bad record in mapper.
- setMapRunnerClass(Class<? extends MapRunnable>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMapSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Turn speculative execution on or off for this job for map tasks.
- setMapSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Turn speculative execution on or off for this job for map tasks.
- setMaxInputSplitSize(Job, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set the maximum split size
- setMaxItems(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
Set the limit on the number of unique values
- setMaxMapAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Set the number of maximum attempts that will be made to run a
map task.
- setMaxMapTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Set the maximum percentage of map tasks that can fail without the
job being aborted.
- setMaxPhysicalMemoryForTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- setMaxReduceAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Set the number of maximum attempts that will be made to run a
reduce task.
- setMaxReduceTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the maximum percentage of reduce tasks that can fail without the job
being aborted.
- setMaxSplitSize(long) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Specify the maximum size (in bytes) of each split.
- setMaxSplitSize(long) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Specify the maximum size (in bytes) of each split.
- setMaxTaskFailuresPerTracker(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the maximum number of failures of a given job per tasktracker.
- setMaxVirtualMemoryForTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMemoryForMapTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMemoryForReduceTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMessage(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the message for this job.
- setMinInputSplitSize(Job, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set the minimum input split size
- setMinSplitSize(long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- setMinSplitSizeNode(long) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Specify the minimum size (in bytes) of each split per node.
- setMinSplitSizeNode(long) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Specify the minimum size (in bytes) of each split per node.
- setMinSplitSizeRack(long) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Specify the minimum size (in bytes) of each split per rack.
- setMinSplitSizeRack(long) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Specify the minimum size (in bytes) of each split per rack.
- setNetworkProperties() - Method in class org.apache.hadoop.contrib.failmon.LogParser
-
- setNextRecordRange(SortedRanges.Range) - Method in class org.apache.hadoop.mapred.TaskStatus
-
Set the next record range which is going to be processed by Task.
- setNumberOfThreads(Job, int) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Set the number of threads in the pool for running maps.
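The MultithreadedMapper setters indexed above are normally used together: the job's mapper is set to MultithreadedMapper itself, and the application's real mapper and thread count are passed through the static setters. A minimal sketch (the FetchMapper class is hypothetical):

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class MultithreadedSetup {
  // Hypothetical I/O-bound mapper; any Mapper subclass could be used here.
  public static class FetchMapper extends Mapper<Object, Object, Object, Object> {}

  public static void configure(Job job) {
    // The task runs MultithreadedMapper, which fans records out to
    // 8 concurrent FetchMapper instances per map task.
    job.setMapperClass(MultithreadedMapper.class);
    MultithreadedMapper.setMapperClass(job, FetchMapper.class);
    MultithreadedMapper.setNumberOfThreads(job, 8);
  }
}
```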
- setNumLinesPerSplit(Job, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Set the number of lines per split
- setNumMapTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the number of map tasks for this job.
- setNumProcessors(int) - Method in class org.apache.hadoop.mapred.TaskTrackerStatus.ResourceStatus
-
Set the number of processors on this TaskTracker.
If the input is not a valid number, it will be set to UNAVAILABLE.
- setNumReduceTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the requisite number of reduce tasks for this job.
- setNumReduceTasks(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the number of reduce tasks for the job.
- setNumTasksToExecutePerJvm(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Sets the number of tasks that a spawned task JVM should run
before it exits
- setOffsets(Configuration, int, int) - Static method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Set the subarray to be used for partitioning to
bytes[left:(right+1)]
in Python syntax.
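The BinaryPartitioner offset setters select which subarray of the serialized key is used when computing the partition. A sketch of setting both ends, assuming keys whose bytes 4 through 7 carry the partitioning field:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner;

public class BinaryPartitionerSetup {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    // Partition on bytes 4..7 of the serialized key,
    // i.e. bytes[4:8] in the Python notation used by the Javadoc.
    BinaryPartitioner.setOffsets(conf, 4, 7);
    return conf;
  }
}
```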
- setOp(DocumentAndOp.Op) - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
-
Set the type of the operation.
- setOut(PrintStream) - Method in class org.apache.hadoop.mapred.tools.MRHAAdmin
-
- setOutput(JobConf, String, String...) - Static method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job with the appropriate output settings
- setOutput(JobConf, String, int) - Static method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job with the appropriate output settings
- setOutput(Job, String, String...) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job with
the appropriate output settings
- setOutput(Job, String, int) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job
with the appropriate output settings
- setOutputCommitter(Class<? extends OutputCommitter>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setOutputCompressionType(JobConf, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Set the SequenceFile.CompressionType
for the output SequenceFile
.
- setOutputCompressionType(Job, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
Set the SequenceFile.CompressionType
for the output SequenceFile
.
- setOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set the CompressionCodec
to be used to compress job outputs.
- setOutputCompressorClass(Job, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set the CompressionCodec
to be used to compress job outputs.
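The compression setters indexed above are usually applied together when a job writes compressed output. A sketch for the new API, assuming gzip-compressed SequenceFile output:

```java
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CompressionSetup {
  public static void configure(Job job) {
    // Enable output compression and pick the codec.
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
    // For SequenceFile output, also choose the compression granularity.
    SequenceFileOutputFormat.setOutputCompressionType(
        job, SequenceFile.CompressionType.BLOCK);
  }
}
```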
- setOutputFieldCount(int) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setOutputFieldNames(String...) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setOutputFormat(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setOutputFormatClass(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setOutputFormatClass(Job, Class<? extends OutputFormat>) - Static method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
Set the underlying output format for LazyOutputFormat.
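LazyOutputFormat wraps another output format so that output files are only created when the first record is actually written. A minimal sketch wrapping TextOutputFormat:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class LazyOutputSetup {
  public static void configure(Job job) {
    // Delegates to TextOutputFormat, but part files are only created
    // on the first write, so empty outputs produce no files.
    LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
  }
}
```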
- setOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the key class for the job output data.
- setOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the key class for the job output data.
- setOutputKeyClass(Class) - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
Sets the output key class.
- setOutputKeyComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the RawComparator
comparator used to compare keys.
- setOutputName(JobContext, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set the base output name for the output file to be created.
- setOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set the Path
of the output directory for the map-reduce job.
- setOutputPath(Job, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set the Path
of the output directory for the map-reduce job.
- setOutputReaderClass(Class<? extends OutputReader>) - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
- setOutputTableName(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the value class for job outputs.
- setOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the value class for job outputs.
- setOutputValueClass(Class) - Method in class org.apache.hadoop.streaming.io.IdentifierResolver
-
Sets the output value class.
- setOutputValueGroupingComparator(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user defined RawComparator
comparator for
grouping keys in the input to the reduce.
- setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setPartitionFile(JobConf, Path) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
Set the path to the SequenceFile storing the sorted partition keyset.
- setPartitionFile(Configuration, Path) - Static method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Set the path to the SequenceFile storing the sorted partition keyset.
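setPartitionFile points TotalOrderPartitioner at a SequenceFile holding the sorted partition keyset; the partitioner then routes each key to the reducer whose range contains it, yielding globally sorted output. A sketch for the new API (the partition file path is hypothetical, and the file must already exist, e.g. written by InputSampler):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class TotalOrderSetup {
  public static void configure(Job job) {
    // The SequenceFile at this hypothetical path must already contain
    // the sorted partition keyset.
    TotalOrderPartitioner.setPartitionFile(
        job.getConfiguration(), new Path("/user/demo/_partitions"));
    job.setPartitionerClass(TotalOrderPartitioner.class);
  }
}
```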
- setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
Define the filtering regex and store it in the conf.
- setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.RegexFilter
-
Define the filtering regex and store it in the conf.
- setPhase(TaskStatus.Phase) - Method in class org.apache.hadoop.mapred.Task
-
Set current phase of the task.
- setPrinter(DancingLinks.SolutionAcceptor<Pentomino.ColumnName>) - Method in class org.apache.hadoop.examples.dancing.Pentomino
-
Set the printer for the puzzle.
- setPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobInProgress
-
- setProfileEnabled(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the system should collect profiler information for some of
the tasks in this job. The information is stored in the user log
directory.
- setProfileParams(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the profiler configuration arguments.
- setProfileTaskRange(boolean, String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the ranges of maps or reduces to profile.
- setProgress(float) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- setProgress(float) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setProperty(String, String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
-
Sets the value of a property in the configuration file.
- setQueueName(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the name of the queue to which this job should be submitted.
- setQueueName(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Set the queue name of the JobQueueInfo
- setQueueState(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Set the state of the queue
- setReduceDebugScript(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the debug script to run when the reduce tasks fail.
- setReducer(JobConf, Class<? extends Reducer<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Sets the Reducer class in the chain job's JobConf.
- setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setReducerMaxSkipGroups(Configuration, long) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the number of acceptable skip groups surrounding the bad group PER
bad group in reducer.
- setReduceSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Turn speculative execution on or off for this job for reduce tasks.
- setReduceSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Turn speculative execution on or off for this job for reduce tasks.
- setRightOffset(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Set the subarray to be used for partitioning to
bytes[:(offset+1)]
in Python syntax.
- setRunningTaskAttempts(Collection<TaskAttemptID>) - Method in class org.apache.hadoop.mapred.TaskReport
-
Set the running attempt(s) of the task.
- setRunState(int) - Method in class org.apache.hadoop.mapred.JobStatus
-
Change the current run state of the job.
- setRunState(TaskStatus.State) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setSchedulingInfo(Object) - Method in class org.apache.hadoop.mapred.JobInProgress
-
- setSchedulingInfo(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Set the scheduling information associated with a particular job queue.
- setSchedulingInfo(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
Used to set the scheduling information associated with a particular Job.
- setSequenceFileOutputKeyClass(JobConf, Class<?>) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Set the key class for the SequenceFile
- setSequenceFileOutputKeyClass(Job, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Set the key class for the SequenceFile
- setSequenceFileOutputValueClass(JobConf, Class<?>) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Set the value class for the SequenceFile
- setSequenceFileOutputValueClass(Job, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Set the value class for the SequenceFile
- setSessionId(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-specified session identifier.
- setSessionTimeZone(Configuration, Connection) - Static method in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Set the session time zone.
- setSigKillInterval(long) - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
-
- setSizes(long[]) - Method in class org.apache.hadoop.filecache.TaskDistributedCacheManager
-
- setSkipOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the directory to which skipped records are written.
- setSkipping(boolean) - Method in class org.apache.hadoop.mapred.Task
-
Sets whether to run Task in skipping mode.
- setSkipRanges(SortedRanges) - Method in class org.apache.hadoop.mapred.Task
-
Set skipRanges.
- setSortComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
-
Define the comparator that controls how the keys are sorted before they
are passed to the
Reducer
.
- setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Turn speculative execution on or off for this job.
- setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Turn speculative execution on or off for this job.
- setState(ParseState) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
-
Set the state of parsing for a particular log file.
- setStatement(PreparedStatement) - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- setStateString(String) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setStatus(String) - Method in interface org.apache.hadoop.mapred.Reporter
-
Set the status description for the task.
- setStatus(String) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- setStatus(String) - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Deprecated.
Set the current status of the task to the given string.
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- setStatus(TaskTrackerStatus) - Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
Set the current status of the task to the given string.
- setStatus(String) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Set the current status of the task to the given string.
- setStatusString(String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- setSuccessEventNumber(int) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Set the event number that was raised for this tip
- setSuccessfulAttempt(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskReport
-
Set the successful attempt ID of the task.
- setTag(Text) - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- setTaskID(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Sets task id.
- setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- setTaskOutputFilter(JobClient.TaskStatusFilter) - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
- setTaskOutputFilter(JobConf, JobClient.TaskStatusFilter) - Static method in class org.apache.hadoop.mapred.JobClient
-
Modify the JobConf to set the task output filter.
- setTaskRunTime(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Set the task completion time
- setTaskStatus(TaskCompletionEvent.Status) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Set task status.
- setTaskTracker(String) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setTaskTrackerHttp(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Set task tracker http location.
- setTotalLogFileSize(long) - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- setup(LocalDirAllocator, TaskTracker.LocalStorage) - Method in class org.apache.hadoop.mapred.DefaultTaskController
-
- setup(LocalDirAllocator, TaskTracker.LocalStorage) - Method in class org.apache.hadoop.mapred.TaskController
-
Does initialization and setup.
- setup(Mapper<K, V, Text, Text>.Context) - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionMapper
-
- setup(Reducer<Text, Text, Text, Text>.Context) - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionReducer
-
- setup(Mapper<K1, V1, K2, V2>.Context) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingMapper
-
- setup(Mapper<K, Text, Text, LongWritable>.Context) - Method in class org.apache.hadoop.mapreduce.lib.map.RegexMapper
-
- setup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
-
Called once at the beginning of the task.
- setup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
-
Called once at the start of the task.
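The setup() hooks above are the standard place for per-task initialization in the new API. A sketch of a hypothetical mapper (class name, configuration key, and I/O types are illustrative) that reads a configured value once before any map() call:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper that caches a configured prefix in setup().
public class PrefixMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  private String prefix;

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    // Runs once at the beginning of the task, before any map() call.
    prefix = context.getConfiguration().get("demo.prefix", "");
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    context.write(new Text(prefix + value.toString()), new LongWritable(1));
  }
}
```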
- setupCache(Configuration, String, String) - Method in class org.apache.hadoop.filecache.TaskDistributedCacheManager
-
Retrieves public distributed cache files into the local cache and updates
the task configuration (which has been passed in via the constructor).
- setUpdate(Document, Term) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
Set the instance to be an update operation.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- setupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For the framework to set up the job output during initialization.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Create the temporary directory that is the root of all of the task
work directories.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For the framework to set up the job output during initialization.
- setupJobConf(int, int, long, int, long, int) - Method in class org.apache.hadoop.examples.SleepJob
-
- setupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
-
- setupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the progress of the job's setup-tasks, as a float between 0.0
and 1.0.
- setupProgress() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the progress of the job's setup, as a float between 0.0
and 1.0.
- setupSecureConnection(ReduceTask.ReduceCopier<K, V>.MapOutputLocation, URLConnection) - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.MapOutputCopier
-
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
Sets up output for the task.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
No task setup required.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
Sets up output for the task.
- setUseNewMapper(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the framework should use the new api for the mapper.
- setUseNewReducer(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the framework should use the new api for the reducer.
- setUser(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the reported username for this job.
- setUserClassesTakesPrecedence(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the boolean property specifying which classpath takes precedence
when tasks are launched: the user's or the system's.
- setUserClassesTakesPrecedence(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the boolean property specifying which classpath takes precedence
when tasks are launched: the user's or the system's.
- setValue(long) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- setValue(long) - Method in interface org.apache.hadoop.mapreduce.Counter
-
Set this counter to the given value.
- setValue(long) - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- setValue(long) - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- setValue(long) - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- setValue(Object) - Method in class org.apache.hadoop.typedbytes.TypedBytesWritable
-
Set the typed bytes from a given Java object.
- setVerbose(boolean) - Method in class org.apache.hadoop.streaming.JarBuilder
-
- setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the current working directory for the default file system.
- setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the current working directory for the default file system.
- setWriteAllCounters(boolean) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Set the "writeAllCounters" option to true or false
- setWriter(IFile.Writer<K, V>) - Method in class org.apache.hadoop.mapred.Task.CombineOutputCollector
-
- setWriteSkipRecs(boolean) - Method in class org.apache.hadoop.mapred.Task
-
Set whether to write skip records.
- setZkfcPort(int) - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceTarget
-
- Shard - Class in org.apache.hadoop.contrib.index.mapred
-
This class represents the metadata of a shard.
- Shard() - Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
-
Constructor.
- Shard(long, String, long) - Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
-
Construct a shard from a version number, a directory, and a generation
number.
- Shard(Shard) - Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
-
Construct using a shard object.
- ShardWriter - Class in org.apache.hadoop.contrib.index.lucene
-
The initial version of an index is stored in the perm dir.
- ShardWriter(FileSystem, Shard, String, IndexUpdateConfiguration) - Constructor for class org.apache.hadoop.contrib.index.lucene.ShardWriter
-
Constructor
- ShellParser - Class in org.apache.hadoop.contrib.failmon
-
Objects of this class parse the output of system command-line
utilities that can give information about the state of
various hardware components in the system.
- ShellParser() - Constructor for class org.apache.hadoop.contrib.failmon.ShellParser
-
- shippedCanonFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
-
- shouldClose(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Returns whether a component task-thread should be
closed because the containing JobInProgress has completed
or the task has been killed by the user.
- shouldCommit(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskInProgress
-
Returns whether the task attempt should be committed or not
- shouldDie() - Method in class org.apache.hadoop.mapred.JvmTask
-
- shouldReset() - Method in class org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate
-
- shouldTruncateLogs(JVMInfo) - Method in class org.apache.hadoop.mapred.TaskLogsTruncater
-
Check the log file sizes generated by the attempts that ran in a
particular JVM
- shuffle(ReduceTask.ReduceCopier<K, V>.MapOutputCopier, ReduceTask.ReduceCopier<K, V>.MapOutputLocation, URLConnection, InputStream, ReduceTask.ReduceCopier<K, V>.ShuffleClientMetrics, Path, long, long) - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier
-
- SHUFFLE_CONSUMER_PLUGIN_ATTR - Static variable in interface org.apache.hadoop.mapreduce.JobContext
-
- SHUFFLE_PROVIDER_PLUGIN_CLASSES - Static variable in class org.apache.hadoop.mapred.TaskTracker
-
- SHUFFLE_SSL_ADDRESS_DEFAULT - Static variable in class org.apache.hadoop.mapred.JobTracker
-
- SHUFFLE_SSL_ADDRESS_KEY - Static variable in class org.apache.hadoop.mapred.JobTracker
-
- SHUFFLE_SSL_ENABLED_DEFAULT - Static variable in class org.apache.hadoop.mapred.JobTracker
-
- SHUFFLE_SSL_ENABLED_KEY - Static variable in class org.apache.hadoop.mapred.JobTracker
-
- SHUFFLE_SSL_PORT_DEFAULT - Static variable in class org.apache.hadoop.mapred.JobTracker
-
- SHUFFLE_SSL_PORT_KEY - Static variable in class org.apache.hadoop.mapred.JobTracker
-
- ShuffleConsumerPlugin - Interface in org.apache.hadoop.mapred
-
- ShuffleConsumerPlugin.Context - Class in org.apache.hadoop.mapred
-
- ShuffleConsumerPlugin.Context(TaskUmbilicalProtocol, JobConf, Task.TaskReporter, ReduceTask) - Constructor for class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- shuffleError(TaskAttemptID, String, JvmContext) - Method in class org.apache.hadoop.mapred.TaskTracker
-
A reduce-task failed to shuffle the map-outputs.
- shuffleError(TaskAttemptID, String, JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Report that a reduce-task couldn't shuffle map-outputs.
- ShuffleProviderPlugin - Interface in org.apache.hadoop.mapred
-
This interface is implemented by objects that are able to answer shuffle requests
sent from a matching Shuffle Consumer that lives in the context of a ReduceTask object.
- shutdown() - Method in class org.apache.hadoop.mapred.TaskTracker
-
- shutdown() - Method in class org.apache.hadoop.util.MRAsyncDiskService
-
Gracefully start the shut down of all ThreadPools.
- shutdownNow() - Method in class org.apache.hadoop.util.MRAsyncDiskService
-
Shut down all ThreadPools immediately.
- signalTask(String, int, ProcessTree.Signal) - Method in class org.apache.hadoop.mapred.DefaultTaskController
-
- signalTask(String, int, ProcessTree.Signal) - Method in class org.apache.hadoop.mapred.TaskController
-
Send a signal to a task pid as the user.
- SingleArgumentRunnable<T> - Interface in org.apache.hadoop.util
-
Simple interface for a Runnable that takes a single argument.
- size() - Method in class org.apache.hadoop.mapred.Counters.Group
-
Deprecated.
- size() - Method in class org.apache.hadoop.mapred.Counters
-
- size() - Method in class org.apache.hadoop.mapred.join.TupleWritable
-
The number of children in this Tuple.
- size() - Method in class org.apache.hadoop.mapred.SpillRecord
-
Return number of IndexRecord entries in this spill.
- size() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- size() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
- size() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- size() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- skip(K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
-
Skip key-value pairs with keys less than or equal to the key provided.
- skip(K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Pass skip key to child RRs.
- skip(K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Skip key-value pairs with keys less than or equal to the key provided.
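The skip(K) contract above can be illustrated with a small standalone sketch. This is not the Hadoop join API; the class and method names below are hypothetical, and a plain sorted iterator stands in for a ComposableRecordReader.

```java
import java.util.Arrays;
import java.util.Iterator;

// Hypothetical sketch of the skip(K) contract: advance a sorted reader
// past every key less than or equal to the provided key, leaving it
// positioned at the first strictly larger key (or exhausted).
public class SkipSketch {
    public static Integer skipTo(Iterator<Integer> sortedKeys, int threshold) {
        while (sortedKeys.hasNext()) {
            int k = sortedKeys.next();
            if (k > threshold) return k; // first key past the threshold
        }
        return null; // reader exhausted
    }

    public static void main(String[] args) {
        Iterator<Integer> it = Arrays.asList(1, 3, 5, 7, 9).iterator();
        Integer first = skipTo(it, 5);
        if (first == null || first != 7) throw new AssertionError("got " + first);
        System.out.println("positioned at " + first);
    }
}
```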
- SkipBadRecords - Class in org.apache.hadoop.mapred
-
Utility class for skip bad records functionality.
- SkipBadRecords() - Constructor for class org.apache.hadoop.mapred.SkipBadRecords
-
- skipType() - Method in class org.apache.hadoop.typedbytes.TypedBytesInput
-
Skips a type byte.
- SleepJob - Class in org.apache.hadoop.examples
-
Dummy class for testing the MR framework.
- SleepJob() - Constructor for class org.apache.hadoop.examples.SleepJob
-
- SleepJob.EmptySplit - Class in org.apache.hadoop.examples
-
- SleepJob.EmptySplit() - Constructor for class org.apache.hadoop.examples.SleepJob.EmptySplit
-
- SleepJob.SleepInputFormat - Class in org.apache.hadoop.examples
-
- SleepJob.SleepInputFormat() - Constructor for class org.apache.hadoop.examples.SleepJob.SleepInputFormat
-
- SMARTParser - Class in org.apache.hadoop.contrib.failmon
-
Objects of this class parse the output of smartmontools to
gather information about the state of disks in the system.
- SMARTParser() - Constructor for class org.apache.hadoop.contrib.failmon.SMARTParser
-
Constructs a SMARTParser and reads the list of disk
devices to query
- solution(List<List<ColumnName>>) - Method in interface org.apache.hadoop.examples.dancing.DancingLinks.SolutionAcceptor
-
A callback to return a solution to the application.
- solve(int[], DancingLinks.SolutionAcceptor<ColumnName>) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
-
Given a prefix, find solutions under it.
- solve(DancingLinks.SolutionAcceptor<ColumnName>) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
-
Solve a complete problem
- solve(int[]) - Method in class org.apache.hadoop.examples.dancing.Pentomino
-
Find all of the solutions that start with the given prefix.
- solve() - Method in class org.apache.hadoop.examples.dancing.Pentomino
-
Find all of the solutions to the puzzle.
- solve() - Method in class org.apache.hadoop.examples.dancing.Sudoku
-
- Sort<K,V> - Class in org.apache.hadoop.examples
-
This is the trivial map/reduce program that does absolutely nothing
other than use the framework to fragment and sort the input values.
- Sort() - Constructor for class org.apache.hadoop.examples.Sort
-
- SOURCE_TAGS_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- specToString(String, String, int, List<Integer>, List<Integer>) - Static method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- spilledRecordsCounter - Variable in class org.apache.hadoop.mapred.Task
-
- SpillRecord - Class in org.apache.hadoop.mapred
-
- SpillRecord(int) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- SpillRecord(Path, JobConf, String) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- SpillRecord(Path, JobConf, Checksum, String) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- split(int) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
-
Generate a list of row choices to cover the first moves.
- split - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.BigDecimalSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.BooleanSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.DateSplitter
-
- split(Configuration, ResultSet, String) - Method in interface org.apache.hadoop.mapreduce.lib.db.DBSplitter
-
Given a ResultSet containing one record (and already advanced to that record)
with two columns (a low value, and a high value, both of the same type), determine
a set of splits that span the given values.
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.FloatSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.IntegerSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.TextSplitter
-
This method needs to determine the splits between two user-provided strings.
- split - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- splitKeyVal(byte[], int, int, Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
-
Split a UTF-8 byte array into key and value,
assuming that the delimiter is at splitpos.
- splitKeyVal(byte[], int, int, Text, Text, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
-
Split a UTF-8 byte array into key and value,
assuming that the delimiter is at splitpos.
- splitKeyVal(byte[], Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
-
Split a UTF-8 byte array into key and value,
assuming that the delimiter is at splitpos.
- splitKeyVal(byte[], Text, Text, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
-
Split a UTF-8 byte array into key and value,
assuming that the delimiter is at splitpos.
- splitKeyVal(byte[], int, int, Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
-
- splitKeyVal(byte[], int, int, Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
-
- splitKeyVal(byte[], Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
-
- splitKeyVal(byte[], Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
-
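The splitKeyVal behavior described above can be sketched in plain Java. This is an illustrative stand-in, not the Hadoop method: it returns Strings instead of filling Text objects, and the name is hypothetical, but the byte-level split at the delimiter position is the same idea.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of splitKeyVal: given a UTF-8 byte array and the
// position of the delimiter, the bytes before it become the key and the
// bytes after it (skipping the separator itself) become the value.
public class KeyValSplitSketch {
    public static String[] splitKeyVal(byte[] utf8, int splitPos, int sepLen) {
        String key = new String(utf8, 0, splitPos, StandardCharsets.UTF_8);
        int valStart = splitPos + sepLen;
        String val = new String(utf8, valStart, utf8.length - valStart,
                                StandardCharsets.UTF_8);
        return new String[] { key, val };
    }

    public static void main(String[] args) {
        byte[] line = "apple\t42".getBytes(StandardCharsets.UTF_8);
        String[] kv = splitKeyVal(line, 5, 1); // tab at byte index 5, 1 byte wide
        if (!kv[0].equals("apple") || !kv[1].equals("42"))
            throw new AssertionError(kv[0] + "/" + kv[1]);
        System.out.println(kv[0] + " -> " + kv[1]);
    }
}
```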
- SplitLineReader - Class in org.apache.hadoop.mapreduce.lib.input
-
- SplitLineReader(InputStream, byte[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.SplitLineReader
-
- SplitLineReader(InputStream, Configuration, byte[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.SplitLineReader
-
- SplitMetaInfoReader - Class in org.apache.hadoop.mapreduce.split
-
An internal utility that reads the split meta info and creates
split meta info objects
- SplitMetaInfoReader() - Constructor for class org.apache.hadoop.mapreduce.split.SplitMetaInfoReader
-
- SshFenceByTcpPort - Class in org.apache.hadoop.mapred
-
NOTE: This is a copy of org.apache.hadoop.ha.SshFenceByTcpPort that uses
MR-specific configuration options (since the original is hardcoded to HDFS
configuration properties so there is no way to run MR and HDFS fencing using
a single configuration file).
- SshFenceByTcpPort() - Constructor for class org.apache.hadoop.mapred.SshFenceByTcpPort
-
- SshFenceByTcpPort.Args - Class in org.apache.hadoop.mapred
-
Container for the parsed arg line for this fencing method.
- SshFenceByTcpPort.Args(String) - Constructor for class org.apache.hadoop.mapred.SshFenceByTcpPort.Args
-
- sshPort - Variable in class org.apache.hadoop.mapred.SshFenceByTcpPort.Args
-
- start() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- start() - Method in class org.apache.hadoop.mapred.JobTrackerHAHttpRedirector
-
- start(Object) - Method in class org.apache.hadoop.mapred.JobTrackerPlugin
-
- start() - Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogManager
-
Starts managing the logs
- startCleanupThread() - Method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
Start the background thread
- startCommunicationThread() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- startJobTracker(JobConf) - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon.JobTrackerRunner
-
- startMap(String) - Method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
-
- startMap(TreeMap, String) - Method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
-
- startNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
-
- startOffset - Variable in class org.apache.hadoop.mapred.IndexRecord
-
- startRecord(String) - Method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
-
- startRecord(Record, String) - Method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
-
- startService() - Static method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- startTracker(JobConf) - Static method in class org.apache.hadoop.mapred.JobTracker
-
Start the JobTracker with given configuration.
- startTracker(JobConf, String) - Static method in class org.apache.hadoop.mapred.JobTracker
-
- startVector(String) - Method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
-
- startVector(ArrayList, String) - Method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
-
- StatusReporter - Class in org.apache.hadoop.mapreduce
-
- StatusReporter() - Constructor for class org.apache.hadoop.mapreduce.StatusReporter
-
- statusUpdate(TaskUmbilicalProtocol) - Method in class org.apache.hadoop.mapred.Task
-
- statusUpdate(TaskAttemptID, TaskStatus, JvmContext) - Method in class org.apache.hadoop.mapred.TaskTracker
-
Called periodically to report Task progress, from 0.0 to 1.0.
- statusUpdate(TaskAttemptID, TaskStatus, JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Report child's progress to parent.
- stop() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon
-
- stop() - Method in class org.apache.hadoop.mapred.JobTrackerHAHttpRedirector
-
- stop() - Method in class org.apache.hadoop.mapred.JobTrackerHAServiceProtocol
-
- stop() - Method in class org.apache.hadoop.mapred.JobTrackerPlugin
-
- stop() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Set the thread state to STOPPING so that the
thread will stop when it wakes up.
- stopCleanupThread() - Method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
Stop the background thread
- stopCommunicationThread() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- stopJobTracker() - Method in class org.apache.hadoop.mapred.JobTrackerHADaemon.JobTrackerRunner
-
- stopNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
-
- stopRunning() - Method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager.CleanupThread
-
- stopTracker() - Method in class org.apache.hadoop.mapred.JobTracker
-
- StreamBackedIterator<X extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapred.join
-
This class provides an implementation of ResetableIterator.
- StreamBackedIterator() - Constructor for class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- StreamBaseRecordReader - Class in org.apache.hadoop.streaming
-
Shared functionality for hadoopStreaming formats.
- StreamBaseRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- StreamInputFormat - Class in org.apache.hadoop.streaming
-
An input format that selects a RecordReader based on a JobConf property.
- StreamInputFormat() - Constructor for class org.apache.hadoop.streaming.StreamInputFormat
-
- StreamJob - Class in org.apache.hadoop.streaming
-
All the client-side work happens here.
- StreamJob(String[], boolean) - Constructor for class org.apache.hadoop.streaming.StreamJob
-
- StreamJob() - Constructor for class org.apache.hadoop.streaming.StreamJob
-
- StreamKeyValUtil - Class in org.apache.hadoop.streaming
-
- StreamKeyValUtil() - Constructor for class org.apache.hadoop.streaming.StreamKeyValUtil
-
- StreamUtil - Class in org.apache.hadoop.streaming
-
Utilities not available elsewhere in Hadoop.
- StreamUtil() - Constructor for class org.apache.hadoop.streaming.StreamUtil
-
- StreamXmlRecordReader - Class in org.apache.hadoop.streaming
-
A way to interpret XML fragments as Mapper input records.
- StreamXmlRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- STRING_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- STRING_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- stringifySolution(int, int, List<List<Pentomino.ColumnName>>) - Static method in class org.apache.hadoop.examples.dancing.Pentomino
-
Convert a solution to the puzzle returned by the model into a string
that represents the placement of the pieces onto the board.
- StringValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
-
This class implements a value aggregator that maintains the biggest of
a sequence of strings.
- StringValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
The default constructor
- StringValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
-
This class implements a value aggregator that maintains the smallest of
a sequence of strings.
- StringValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
The default constructor
- SUBDIR - Static variable in class org.apache.hadoop.mapred.TaskTracker
-
- submit() - Method in class org.apache.hadoop.mapreduce.Job
-
Submit the job to the cluster and return immediately.
- submit() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Submit this job to mapred.
- submitAndMonitorJob() - Method in class org.apache.hadoop.streaming.StreamJob
-
- submitJob(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Submit a job to the MR system.
- submitJob(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
-
Submit a job to the MR system.
- submitJob(JobID, String, Credentials) - Method in class org.apache.hadoop.mapred.JobTracker
-
JobTracker.submitJob() kicks off a new job.
- submitJob(JobID, String, Credentials) - Method in class org.apache.hadoop.mapred.LocalJobRunner
-
- submitJob(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
- submitJobInternal(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
-
Internal method for submitting jobs to the system.
- Submitter - Class in org.apache.hadoop.mapred.pipes
-
The main entry point and job submitter.
- Submitter() - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
-
- Submitter(Configuration) - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
-
- SUBSTITUTE_TOKEN - Static variable in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
If users are providing their own query, the following string is expected to
appear in the WHERE clause, which will be substituted with a pair of conditions
on the input to allow input splits to parallelise the import.
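The token substitution described above can be shown with a small sketch. The token value "$CONDITIONS" and the rewriting step are assumptions drawn from how DataDrivenDBInputFormat-style user queries are commonly written; the class name and split predicate below are purely illustrative.

```java
// Hypothetical sketch: per split, the framework replaces the substitute
// token in the user's query with bounding conditions for that split.
public class ConditionsSubstitutionSketch {
    // Assumed token value; treat as illustrative, not authoritative.
    static final String SUBSTITUTE_TOKEN = "$CONDITIONS";

    public static String applySplit(String userQuery, String lower, String upper) {
        String predicate = lower + " AND " + upper;
        return userQuery.replace(SUBSTITUTE_TOKEN, predicate);
    }

    public static void main(String[] args) {
        String q = "SELECT id, name FROM users WHERE " + SUBSTITUTE_TOKEN;
        String perSplit = applySplit(q, "id >= 0", "id < 1000");
        if (!perSplit.equals("SELECT id, name FROM users WHERE id >= 0 AND id < 1000"))
            throw new AssertionError(perSplit);
        System.out.println(perSplit);
    }
}
```

Each input split gets its own rewritten query, which is what lets the import run in parallel.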
- SUCCEEDED - Static variable in class org.apache.hadoop.mapred.JobStatus
-
- SUCCEEDED_FILE_NAME - Static variable in class org.apache.hadoop.mapred.FileOutputCommitter
-
- SUCCEEDED_FILE_NAME - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- SUCCESS - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- successFetch() - Method in class org.apache.hadoop.mapred.ReduceTask.ReduceCopier.ShuffleClientMetrics
-
- Sudoku - Class in org.apache.hadoop.examples.dancing
-
This class uses the dancing links algorithm from Knuth to solve sudoku
puzzles.
- Sudoku(InputStream) - Constructor for class org.apache.hadoop.examples.dancing.Sudoku
-
Set up a puzzle board to the given size.
- Sudoku.ColumnName - Interface in org.apache.hadoop.examples.dancing
-
This is a marker interface for the columns created for the
Sudoku solver.
- sum(Counters, Counters) - Static method in class org.apache.hadoop.mapred.Counters
-
Deprecated.
Convenience method for computing the sum of two sets of counters.
- supportIsolationRunner(JobConf) - Method in class org.apache.hadoop.mapred.Task
-
- suspend() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Suspend the running thread
- swap(int, int) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
Swap logical indices i and j, taken modulo the offset capacity.
- syncLogs(String, TaskAttemptID, boolean, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
-
- SYSTEM_DIR_SEQUENCE_PREFIX - Static variable in class org.apache.hadoop.mapred.JobTrackerHAServiceProtocol
-
- SystemLogParser - Class in org.apache.hadoop.contrib.failmon
-
An object of this class parses a Unix system log file to create
appropriate EventRecords.
- SystemLogParser(String) - Constructor for class org.apache.hadoop.contrib.failmon.SystemLogParser
-
Create a new parser object.