[SPIGOT-3606] ArrayIndexOutOfBoundsException [1.10 R1]
Created: 09/Oct/17  Updated: 14/Oct/17  Resolved: 09/Oct/17

Status: Closed
Project: Spigot
Component/s: None
Affects Version/s: None
Fix Version/s: None
Type: Bug
Priority: Minor
Reporter: egg82
Assignee: Unassigned
Resolution: Invalid
Votes: 0
Labels: 1.10, Craftbukkit, blocks, bug, chunks, error
Environment: Data is unknown. Version is slightly faked in order to post this. Sorry.
Description
I use an exception handling service in my plugin that allows me to track errors caused by my plugins. Occasionally I get other errors; this is one such example. Stacktrace below:

ArrayIndexOutOfBoundsException: 189

getTopWalkableBlock as referenced can be found here, with the specific line referenced here.
Comments |
Comment by Black Hole [ 14/Oct/17 ] |
Spigot is a modified vanilla Minecraft server, so such a change would be impossible to maintain: it is much more complicated and involves multiple lists/maps and accesses across multiple classes. Some forks have tried multithreading; they ended up with a big patch file that they were unable to merge with new Minecraft versions.
Comment by egg82 [ 13/Oct/17 ] |
I agree that while the thread comparison check is essentially free, Thread#currentThread may not be. However, that's the main body of the issue. Foregoing "synchronized" function blocks due to the performance hit of thread locking, Map and List are completely safe for concurrent reads. So, the alternative I suggested earlier: a chunk cache. The real kicker here is that I mentioned it may be difficult to implement correctly. Yesterday, however, I came up with an interesting idea.

I usually rave about the Service Locator pattern and how useful it is, and it may be just as useful here. Instead of one list with read/write access, you have a "master" list with write access (only able to be written to safely on the main Bukkit thread, and only written to at the end of the thread's tick, as per usual) and a service providing a list with read access. The write list is copied at the end of the tick and an immutable read list is created from the copy.

I realize I essentially just described CopyOnWriteArrayList, but with better performance, because the new list can be created on another thread without issue and because the list wouldn't be cloned on every write; just at the end of the tick. The only synchronization would be in storage and retrieval of the new list, and that can be handled with a local cache in each class providing wrappers to the underlying array (refreshed at the end of each tick). Normally an issue here is modifying an object inside the read-only list, but since it would be immutable that won't be an issue either. One of Java's perks as a high-level language.

Let me know if there are other challenges I missed, however. I realize that this may not make any sense. If I came across well, then great! Otherwise, here's some quick pseudocode. Bukkit.java OtherClass.java
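[Editor's note: the pseudocode attachments referenced above (Bukkit.java, OtherClass.java) are not preserved in this export. Below is a minimal, hypothetical sketch of the end-of-tick snapshot idea described in this comment: a mutable "master" list written only from the main thread, copied once per tick into an immutable list that async readers can use freely. The class and method names are illustrative, not actual CraftBukkit code.]

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class ChunkSnapshotService {

    // Mutable "master" list; written only from the main server thread.
    private final List<Object> writeList = new ArrayList<>();

    // Immutable snapshot published once per tick; safe for concurrent reads
    // from any thread.
    private volatile List<Object> readList = Collections.emptyList();

    // Main-thread only, as writes happen today.
    void add(Object chunk) {
        writeList.add(chunk);
    }

    // Called once at the end of each tick: copy the write list and publish
    // an immutable view through a single volatile write.
    void publishSnapshot() {
        readList = Collections.unmodifiableList(new ArrayList<>(writeList));
    }

    // Async readers only ever see a fully built, immutable list that is at
    // most one tick old.
    List<Object> snapshot() {
        return readList;
    }
}
```

Publishing the snapshot through a single volatile field is what keeps the reader side lock-free; readers may observe a list that is up to one tick stale, which matches the behavior described in the comment.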
Comment by md_5 [ 10/Oct/17 ] |
Thread safety is not nearly as simple as just throwing in a synchronised and expecting it to work properly. The entire Bukkit API, except for a few documented exceptions, is not thread safe, because the server as a whole just is not, and cannot be made, robust enough to handle asynchronous access. The moral of the story is that unless you know exactly what you are doing, do not use any aspect of the Bukkit API async.
Comment by egg82 [ 10/Oct/17 ] |
I didn't think to check for previous issues related to this one, so my apologies. It does seem as though the issue is related to asynchronous execution of the Location#getBlock function. However, after digging into CraftBukkit's source a bit I noticed the ChunkProviderServer class can be easily modified to use "synchronized" on the functions that rely on the internal "chunks" object within it. In fact, the "chunks" object seems to have been made public with no real reason for it to be, since the class itself provides all the necessary methods for accessing that object in every way that Bukkit needs. Even if there comes a time where that is no longer the case, it seems like it could be modified to fit new standards quite easily.

I propose moving the "chunks" object to private and adding the "synchronized" keyword to functions accessing that object within the ChunkProviderServer class. This would allow asynchronous execution of functions like getBlock (increasing speed) while having no performance impact on Bukkit as a whole. An alternate proposition (if you'd like to avoid the "synchronized" keyword) is creating a read-only cache of the loaded chunks, where read accessors would provide the copied version of the loaded chunks and write accessors would write to the underlying chunk cache. The latter would be more difficult to implement and would likely cause issues, however.

Now, I understand asking why something like getBlock should be made available for asynchronous access. The reason is that it makes sense in terms of both performance and common sense. When accessing objects for reading (not writing, necessarily) it would make sense that you are able to get a result without causing undefined behavior, regardless of synchronicity. For example, if I were using a function that gave me a boolean result as a test before performing a more labor-intensive function, I wouldn't expect (nor would I want) that boolean function to throw an exception. In the same manner, when I read blocks or chunks for data I don't expect exceptions to be thrown or undefined behavior to occur when accessed from a different thread. What I do expect is an exception to be thrown when attempting to load or access a chunk that does not exist, or to write data outside of the main thread; and then I expect that exception to be clear as to why it's being thrown (as it is currently).

In short, bug report turned proposition. I believe read functions accessing "chunks" in ChunkProviderServer should be marked "synchronized" so read operations outside of the main thread don't cause undefined behavior, and attempted write operations outside of the main thread should continue to throw exceptions as they have been.

Edit: Sorry md_5, I didn't see your comment until after I posted my paragraphs. To be fair, though, I did make it fairly clear that I'm not the server owner and am instead the plugin author. Having the "please contact the plugin author" line told me it either wasn't fully read or it wasn't fully understood.
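[Editor's note: as a rough illustration of the first proposal above (a private "chunks" map guarded by synchronized accessors), here is a hedged sketch. SimpleChunkProvider and its methods are stand-ins and do not reflect the real ChunkProviderServer signatures.]

```java
import java.util.HashMap;
import java.util.Map;

final class SimpleChunkProvider {

    // Kept private so every access funnels through the synchronized methods.
    private final Map<Long, Object> chunks = new HashMap<>();

    // Read access: safe from any thread because it holds the same monitor
    // as the writers.
    synchronized Object getChunkIfLoaded(int x, int z) {
        return chunks.get(key(x, z));
    }

    // Write access: still intended for the main thread, but guarded by the
    // same monitor so a reader never observes the map mid-update.
    synchronized void putChunk(int x, int z, Object chunk) {
        chunks.put(key(x, z), chunk);
    }

    private static long key(int x, int z) {
        // Pack the two chunk coordinates into one long key.
        return ((long) x & 0xFFFFFFFFL) | (((long) z & 0xFFFFFFFFL) << 32);
    }
}
```

The trade-off raised elsewhere in this thread still applies: a single monitor serializes every chunk lookup, and the real server touches chunk state through multiple lists/maps across multiple classes, not just one provider.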
Comment by md_5 [ 10/Oct/17 ] |
Indeed, the whole stack trace shows it's occurring async, plus the comments about 1.10. Before jumping to attack people helping you, it helps to be correct. In this case you both have control over the error and can fix it.
Comment by Black Hole [ 10/Oct/17 ] |
This line suggests you're calling the method from async tasks: it's never safe to check for chunks or blocks in async tasks, so you should run all this code on the main thread. Once you've fixed your code and updated to the latest version (currently 1.12.2), if you still get some issues, feel free to create another issue.
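[Editor's note: for reference, a minimal example of the fix being suggested here, using the standard Bukkit scheduler to hop back to the main thread before touching blocks. The plugin, location, and handleBlock names are placeholders for the plugin's own code.]

```java
import org.bukkit.Bukkit;
import org.bukkit.Location;
import org.bukkit.block.Block;
import org.bukkit.plugin.Plugin;

final class MainThreadBlockAccess {

    static void lookUpBlock(Plugin plugin, Location location) {
        // Heavy, non-Bukkit work can stay async...
        Bukkit.getScheduler().runTaskAsynchronously(plugin, () -> {
            // ... expensive work here (I/O, computation, etc.) ...

            // ...but block/chunk access has to happen on the main thread.
            Bukkit.getScheduler().runTask(plugin, () -> {
                Block block = location.getBlock();
                handleBlock(block);
            });
        });
    }

    private static void handleBlock(Block block) {
        // plugin-specific handling
    }
}
```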
Comment by egg82 [ 10/Oct/17 ] |
I appreciate the time you take to deliver feedback, but please read the stack trace or description more carefully. In this case, I am a plugin author. My plugin uses error reporting and handling code that sends exceptions with full stack traces to me. In this instance my plugin sent me an error that I neither have control over nor can fix. One of the servers that my (public) plugin runs on uses 1.10 (R1) and encountered this error in its ChunkProviderServer#getChunkIfLoaded function, where it looks like an off-by-one error in its Long2ObjectOpenHashMap#get function. I don't know the internals, so I can't comment on exactly why it happened; just that it did.

If this issue is already resolved or can be labeled as "wontfix" then of course feel free to close it. I just ask that you read more carefully next time, as this (or something like it) might show up in the future. I'm just providing feedback to those who need it where I'm able to.
Comment by Black Hole [ 09/Oct/17 ] |
1.10 isn't supported anymore, please upgrade to 1.12.2. For community support please use the Spigot forums. Anyway, you should ask the author of that plugin for help.