I am running a Hadoop program that processes terabyte-scale data. The code was tested successfully on a small sample (100 GB) and it worked. However, when I run it on the full dataset, the program freezes forever at Map 99% Reduce 33%. There is no error, and the userlog folder stays small (<30 MB); if something were failing, it would be generating gigabytes of error logs. I checked the mapper and reducer logs, and it seems the reducer is waiting for output from a mapper that never arrives. What could be the cause, and how can I debug this? -- Shi Yu