Spigot / SPIGOT-2385

FileIOThread will cause chunk overwrites/data loss on bigger servers


    • Type: Bug
    • Resolution: Fixed
    • Priority: Major

      Hey, okay, this is a bigger one and it took a long time to find. I am pretty sure this is a vanilla bug too (but it is very hard to test). I want to validate my concerns here before reporting it to Mojang.

      Groundwork:

      • FileIOThread is the background service that saves already-unloaded chunks to their respective RegionFile. This service runs asynchronously (a simplified sketch of the hand-off follows this list)
      • RegionFileCache holds a set of cached RegionFiles used for loading and saving chunk data
      • ChunkRegionLoader uses RegionFileCache for the various lookups, loads and saves
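
      To make that hand-off concrete, here is a minimal, hypothetical sketch of the pattern: the main thread queues pending chunk saves and a background thread drains the queue later. The class and method names below are made up for illustration and are not the NMS API.

          import java.util.Queue;
          import java.util.concurrent.ConcurrentLinkedQueue;

          // Hypothetical stand-in for the FileIOThread pattern described above.
          final class AsyncChunkSaver implements Runnable {
              // Saves queued by the main thread when chunks are unloaded.
              private final Queue<Runnable> pendingSaves = new ConcurrentLinkedQueue<>();

              // Main thread: hand the serialized chunk over for later writing.
              void queueSave(Runnable writeTask) {
                  pendingSaves.add(writeTask);
              }

              // Background thread: the actual RegionFile writes happen here, long after
              // the main thread has moved on (and possibly after the cache was cleared).
              @Override
              public void run() {
                  Runnable task;
                  while ((task = pendingSaves.poll()) != null) {
                      task.run();
                  }
              }
          }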

      Problem:
      When the maximum number of cached RegionFiles in RegionFileCache is reached, a clear method is called. This call is unsafe and, in the specific situation described below, results in chunk regenerations.
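
      The failure mode is a reference escaping its lock: a caller obtains a stream backed by a cached handle, the cache is cleared on another thread, and the later write hits a dead handle. Below is a minimal, self-contained illustration of that pattern; the names are made up and this is not the actual RegionFileCache code.

          import java.io.DataOutputStream;
          import java.io.File;
          import java.io.FileOutputStream;
          import java.io.IOException;

          // Hypothetical model of the race. RegionFileCache differs in detail,
          // but the escaping reference is the same problem.
          final class HandleCache {
              private static DataOutputStream cached;

              // Analogue of handing out a write stream by reference (RegionFileCache "d").
              static synchronized DataOutputStream getStream(File target) throws IOException {
                  if (cached == null) {
                      cached = new DataOutputStream(new FileOutputStream(target));
                  }
                  return cached; // the reference escapes the synchronized block
              }

              // Analogue of the clear method: every handle given out earlier is now dead.
              static synchronized void clear() throws IOException {
                  if (cached != null) {
                      cached.close();
                      cached = null;
                  }
              }

              public static void main(String[] args) throws Exception {
                  File tmp = File.createTempFile("region", ".bin");
                  DataOutputStream out = getStream(tmp); // e.g. the async save thread fetches its stream
                  clear();                               // cache limit reached on another thread
                  out.writeInt(42);                      // IOException: stream closed, the data never reaches disk
              }
          }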

      Reproduction:
      Change the RegionFileCache limit from 256 to 2. This is enough to spam the console with saving errors, because the synchronization code in RegionFileCache is implemented incorrectly.

            // inside RegionFileCache's RegionFile lookup (the a(File, int, int) used by c/d below):
            // once the cache holds the configured number of RegionFiles (normally 256),
            // the no-argument a() clears them all
            if (RegionFileCache.a.size() >= 2) {
                a();
            }
      

      Reproduction of chunk regenerations:
      The best way is to change the limit to 2 as seen above.

      1. Create a flat world and generate enough chunks in an area
      2. Create a normal world
      3. Copy all the region files from the flat world into the normal world (don't overwrite the level.dat)
      4. Start up the server and fly around in gamemode 3 (spectator).
      5. The console will fill up with errors. About every 1-2 minutes a newly generated chunk will appear in the previously flat area.

      One way to fix this:
      As the general implementation of RegionFileCache is faulty, the methods "c" and "d" need to be rewritten. They are the problem because they hand out references to region files. Instead we can change them to hand out the NBT data directly and mark them synchronized. With this setup the synchronization actually works:

    // Kade: possibly broken by FileIOThread too
    @Deprecated
    public static DataInputStream c(File file, int i, int j) {
        RegionFile regionfile = a(file, i, j);

        return regionfile.a(i & 31, j & 31);
    }

    // Kade: broken by FileIOThread! This will return a reference to a RegionFile that may be removed before it is used!
    @Deprecated
    public static DataOutputStream d(File file, int i, int j) {
        RegionFile regionfile = a(file, i, j);

        return regionfile.b(i & 31, j & 31);
    }

    // Replacement for c(...): deserializes while still holding the class lock,
    // so the RegionFile cannot be evicted between lookup and read.
    public static synchronized NBTTagCompound fixedc(File file, int i, int j) throws IOException {
        RegionFile regionfile = a(file, i, j);
        DataInputStream datainputstream = regionfile.a(i & 31, j & 31);

        if (datainputstream == null) return null; // chunk not present; the caller (ChunkRegionLoader loadChunk) handles null

        return NBTCompressedStreamTools.a(datainputstream);
    }

    // Replacement for d(...): serializes and closes the stream while still holding the class lock.
    public static synchronized void fixedd(File file, int i, int j, NBTTagCompound nbttagcompound) throws IOException {
        RegionFile regionfile = a(file, i, j);
        DataOutputStream dataoutputstream = regionfile.b(i & 31, j & 31);

        NBTCompressedStreamTools.a(nbttagcompound, (DataOutput) dataoutputstream); // moved here from ChunkRegionLoader b(...)
        dataoutputstream.close();
    }
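
      For completeness, the callers in ChunkRegionLoader then switch from the stream-returning methods to the new compound-based ones. The snippet below is only a sketch of that change; the variable names (worldDir, chunkX, chunkZ) and the surrounding loadChunk / b(...) bodies are assumptions based on the comments above, not a verbatim patch.

          // Load side (ChunkRegionLoader loadChunk): instead of fetching a DataInputStream
          // via RegionFileCache.c(...) and deserializing it here, read the finished compound
          // through the synchronized method:
          NBTTagCompound nbttagcompound = RegionFileCache.fixedc(worldDir, chunkX, chunkZ);
          if (nbttagcompound == null) {
              return null; // chunk was never saved to this region file
          }

          // Save side (ChunkRegionLoader b(...)): instead of fetching a DataOutputStream via
          // RegionFileCache.d(...) and serializing into it, hand the compound over:
          RegionFileCache.fixedd(worldDir, chunkX, chunkZ, nbttagcompound);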
      

  Attachments:
  1. 2016-06-11_12.40.08.png (3.01 MB) Lokio Case
  2. 2016-06-11_16.36.53.png (2.52 MB) Lokio Case

            Assignee: Unassigned
            Reporter: Kademlia