Class CoalescingAddRemoveStrategy<T>

java.lang.Object
docking.widgets.table.CoalescingAddRemoveStrategy<T>
Type Parameters:
T - the row type
All Implemented Interfaces:
TableAddRemoveStrategy<T>

public class CoalescingAddRemoveStrategy<T> extends Object implements TableAddRemoveStrategy<T>
The ThreadedTableModel does not function correctly with data that can change outside of the table. For example, if a table uses db objects as row objects, these db objects can be changed by the user and by analysis after the table has been loaded. The problem is that the table's sort can be broken when new items are added, removed or re-inserted, as this process requires a binary search, which breaks if the criteria used to sort the data have changed. Effectively, a row object change can break the binary search if that item stays in a previously sorted position, but has updated data that would put it in a new position if sorted again. For example, if the table is sorted on name and the name of an item changes, then future uses of the binary search will be broken while that item remains in the position that matches its old name.
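The failure mode above can be demonstrated with a small, self-contained sketch (the Row type and names here are hypothetical, not part of the Ghidra API): once one row's sort key is mutated in place, the binary search contract is violated and lookups for other, unmodified rows can fail.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class StaleSortDemo {

    // Hypothetical mutable row type, standing in for a db object
    static class Row {
        String name;
        Row(String name) { this.name = name; }
    }

    static final Comparator<Row> BY_NAME = Comparator.comparing(r -> r.name);

    /** Searches for the unmodified "cherry" row after another row's key changed. */
    static int searchAfterMutation() {
        List<Row> rows = new ArrayList<>(
            List.of(new Row("apple"), new Row("banana"), new Row("cherry")));
        rows.sort(BY_NAME);            // sorted: apple, banana, cherry

        rows.get(1).name = "zebra";    // sort key mutated in place; list not re-sorted

        // "cherry" is still in the list, but the stale "zebra" entry sits where
        // "banana" used to be, so the search walks the wrong half of the list
        return Collections.binarySearch(rows, new Row("cherry"), BY_NAME);
    }

    public static void main(String[] args) {
        System.out.println(searchAfterMutation()); // negative: item not found
    }
}
```

Note that a search for the mutated row itself may still happen to succeed; the danger is that any search can silently land in the wrong half of the list once the sorted-order precondition is broken.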

This issue has been around for quite some time. To completely fix this issue, each row object of the table would need to be immutable, at least with respect to the sort criteria. We could fix this in the future if the *mostly correct* sorting behavior is not good enough. For now, the client can trigger a re-sort (e.g., by opening and closing the table) to fix the slightly out-of-sort data.

The likelihood of the sort being inconsistent relates directly to how many changed items are in the table at the time of an insert. The more changed items, the higher the chance of a stale/misplaced item being used during a binary search, thus producing an invalid insert position.

This strategy is set up to reduce the number of invalid items in the table at the time the inserts are applied. The basic workflow of this algorithm is:

 1) condense the add / remove requests to remove duplicate efforts
 2) process all removes first
    --all pure removes
    --all removes as part of a re-insert
 3) process all items that failed to remove due to the sort data changing
 4) process all adds (this step will fail if the data contains mis-sorted items)
    --all adds as part of a re-insert
    --all pure adds
 
Step 3, processing failed removals, is done to avoid a brute force lookup at each removal request.
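The steps above can be sketched as follows. This is a simplified illustration with hypothetical names, not the actual Ghidra implementation; it coalesces requests, performs removals first, sweeps out the items the binary search missed in a single pass, and only then performs the adds against clean data.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CoalescingSketch {

    enum Op { ADD, REMOVE, REINSERT }

    /** Step 1: condense duplicates; a remove plus an add of the same item is a re-insert. */
    static <T> Map<T, Op> coalesce(List<T> adds, List<T> removes) {
        Map<T, Op> ops = new LinkedHashMap<>();
        for (T t : removes) {
            ops.put(t, Op.REMOVE);
        }
        for (T t : adds) {
            ops.merge(t, Op.ADD, (oldOp, newOp) -> Op.REINSERT);
        }
        return ops;
    }

    static <T> void process(List<T> data, Map<T, Op> ops, Comparator<T> sort) {
        List<T> failedRemovals = new ArrayList<>();

        // Step 2: all removes first (pure removes and the remove half of re-inserts)
        for (Map.Entry<T, Op> e : ops.entrySet()) {
            if (e.getValue() != Op.ADD) {
                int i = Collections.binarySearch(data, e.getKey(), sort);
                if (i >= 0 && data.get(i).equals(e.getKey())) {
                    data.remove(i);
                }
                else {
                    failedRemovals.add(e.getKey()); // sort key changed; defer to step 3
                }
            }
        }

        // Step 3: one linear sweep for the items the binary search missed,
        // instead of a brute force lookup per failed removal request
        if (!failedRemovals.isEmpty()) {
            data.removeAll(new HashSet<>(failedRemovals));
        }

        // Step 4: all adds (the add half of re-inserts and pure adds) against clean data
        for (Map.Entry<T, Op> e : ops.entrySet()) {
            if (e.getValue() != Op.REMOVE) {
                int i = Collections.binarySearch(data, e.getKey(), sort);
                data.add(i >= 0 ? i : -i - 1, e.getKey());
            }
        }
    }
}
```

Because every removal happens before any add, the adds in step 4 run against data that no longer contains stale entries, which is what keeps their binary searches valid.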

This strategy allows for the use of client proxy objects. The proxy objects should be coded such that their hashCode() and equals() methods match those of the data's real objects. These proxy objects allow clients to search for an item without having a reference to the actual item. In this sense, the proxy object is equal to the existing row object in the table model, but is not the same instance as that row object.
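A minimal sketch of the proxy pattern, using a hypothetical row type keyed by address (not a Ghidra class): because equals() and hashCode() are based only on the identity key, a key-only instance is equal to the real row and can be used to find or remove it without holding a reference to it.

```java
public class ProxyDemo {

    // Hypothetical row type: identity is the address; name is mutable display data
    static class SymbolRow {
        final long address;
        String name;

        SymbolRow(long address, String name) {
            this.address = address;
            this.name = name;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof SymbolRow && ((SymbolRow) o).address == address;
        }

        @Override
        public int hashCode() {
            return Long.hashCode(address);
        }
    }

    public static void main(String[] args) {
        SymbolRow real = new SymbolRow(0x401000L, "entry_point");
        SymbolRow proxy = new SymbolRow(0x401000L, null); // key only; no real data

        // Equal for lookup purposes, but not the same instance
        System.out.println(real.equals(proxy) && real != proxy); // true
    }
}
```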