Answer: Hadoop streaming is a powerful utility that ships with the Hadoop distribution. The basic concept of the Hadoop framework is to split up a task, process the pieces in parallel, and then join the results back together to get the final output. There are two main components involved in this framework:
a) the Map application
b) the Reduce application
The Hadoop streaming utility lets you write Map/Reduce applications in any language that can read from stdin and write to stdout.
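Because the streaming contract is just "lines in on stdin, lines out on stdout", a mapper and reducer can be sketched in any language. Here is an illustrative word-count pair in Python; the function names and sample data are made up for this example, and the `__main__` block only simulates locally what Hadoop would do via `cat input | mapper | sort | reducer`:

```python
import sys
from itertools import groupby

def map_lines(lines):
    # Mapper side of the streaming protocol: for every input line,
    # emit one "key<TAB>value" line per word, as the mapper would
    # write to stdout.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reduce_lines(sorted_lines):
    # Reducer side: Hadoop's shuffle delivers the mapper output
    # sorted by key, so consecutive identical keys can be summed
    # with groupby.
    pairs = (line.split("\t") for line in sorted_lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # Local stand-in for the full pipeline on a tiny sample.
    sample = ["the cat sat", "the dog sat"]
    for out in reduce_lines(sorted(map_lines(sample))):
        print(out)
```

In a real job, each function's body would instead read `sys.stdin` and `print` its results, and Hadoop would handle the sorting between the two stages.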
I read your introductory article about Hadoop streaming and found it really helpful, but I have more questions about how to use it.
My main question: if my Perl script needs more than one argument, how can I pass them on the command line?
For example, I used the following command, where I passed multiple -input options to handle the multiple arguments. But in fact, only the first one is the actual data input; all the others are just resources the Perl script needs to read in order to process that first input.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -D mapred.reduce.tasks=0 \
    -D mapred.map.tasks.speculative.execution=false \
    -D mapred.task.timeout=12000000 \
    -input nlp_research/edt_nlp_data/3000001.txt \
    -input shift.txt \
    -input lists \
    -input dict \
    -input nlp_research/deid-1.1/deid.config \
    -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
    -output perl_output \
    -mapper deid_mapper.pl \
    -file deid_mapper.pl
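One common pattern for this situation (a sketch only, not tested against this exact setup): keep just the real data set as -input, and ship each auxiliary resource with its own -file option, the same way deid_mapper.pl itself is shipped. Files passed via -file are copied into each task's working directory, so the script can open them by their bare names; arguments can also be passed by quoting the whole mapper command:

```shell
# Hypothetical rework of the command above. Only the real data is an
# -input; shift.txt, lists, dict, and deid.config travel with the job
# via -file and land in the task's working directory. The quoted
# -mapper string passes them to the script as ordinary arguments.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -D mapred.reduce.tasks=0 \
    -D mapred.map.tasks.speculative.execution=false \
    -D mapred.task.timeout=12000000 \
    -input nlp_research/edt_nlp_data/3000001.txt \
    -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
    -output perl_output \
    -mapper 'deid_mapper.pl shift.txt lists dict deid.config' \
    -file deid_mapper.pl \
    -file shift.txt \
    -file lists \
    -file dict \
    -file nlp_research/deid-1.1/deid.config
```

Whether the script reads the helper files from argv or simply opens them by name is up to the script; either way they are local files in the task's working directory, not extra map inputs.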
If you can give me some guidance, that would be great!