distributed_computing:data_processing:hadoop:hdfs:small_files

Small file problem

Memory overhead

  • Each file, directory, and block is represented as an object in the namenode's memory and occupies about 150 bytes. 10,000,000 single-block small files ⇒ (10,000,000 block objects + 10,000,000 file inodes) * 150 bytes ⇒ ~3 GB
  • The namenode's capacity is therefore limited by its main memory, not by disk space on the datanodes
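The memory estimate above can be sketched as a small calculation. This is a rough sizing sketch assuming the common rule of thumb of ~150 bytes per namenode object (one inode per file plus one object per block); the function name and constant are illustrative, not part of any Hadoop API.

```python
# Assumption: ~150 bytes of namenode heap per metadata object
# (the widely cited rule of thumb; actual object sizes vary by version).
OBJECT_BYTES = 150

def namenode_memory_bytes(num_files, blocks_per_file=1):
    """Estimate namenode heap consumed by file metadata.

    Each file contributes one inode object plus one object per block.
    Replication does not multiply these objects; replica locations add
    comparatively little, so they are ignored here.
    """
    objects = num_files * (1 + blocks_per_file)
    return objects * OBJECT_BYTES

# 10,000,000 small files, each fitting in a single block:
gb = namenode_memory_bytes(10_000_000) / 1e9
print(f"~{gb:.1f} GB")  # → ~3.0 GB
```

The same 10,000,000 files stored as, say, 100,000 large files of 100 blocks each would need only about a fifth of that memory, which is the core of the small-files problem.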
  • Last modified: 2019/10/25 19:55
  • by phreazer