Discussion in 'Resources' started by AronTheGamer, Aug 21, 2014.
Just about everyone who uses a Map uses a HashMap, and that is not how a HashMap works.
Imagine if you had 1 million entries. Your code would take, in the worst case, O(n) time.
A HashMap, however, as the name suggests, uses the hash code of each key to distribute entries into 'buckets'. When you request a value by key, it computes the key's hash code and goes straight to the correct bucket. From there, it only has to search that one bucket, which is much faster.
Think of it like an actual dictionary, where the buckets are A-Z. When you look for the word 'zoo', you don't start at 'A' and read all the way through. You start at 'Z' and go from there.
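To make the bucket idea concrete, here is a minimal sketch. Note that `BucketDemo` and `bucketIndex` are made-up names for illustration, and the real `java.util.HashMap` also spreads the hash bits and uses a power-of-two table, but the basic idea (hash code to bucket index, then search only that bucket) is the same:

```java
import java.util.HashMap;
import java.util.Map;

public class BucketDemo {
    // Hypothetical simplification of how a hash table picks a bucket:
    // mask off the sign bit, then reduce the hash code modulo the
    // number of buckets. Equal keys always land in the same bucket.
    static int bucketIndex(Object key, int capacity) {
        return (key.hashCode() & 0x7fffffff) % capacity;
    }

    public static void main(String[] args) {
        Map<String, String> dict = new HashMap<>();
        dict.put("zoo", "a place with animals");

        // The map jumps straight to one bucket instead of scanning
        // every entry, which is why get() does not slow down as the
        // map grows.
        System.out.println(bucketIndex("zoo", 16));
        System.out.println(dict.get("zoo"));
    }
}
```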
Is this some kind of joke? If you want to learn how a HashMap is actually implemented, and why it is actually *fast*, see https://github.com/AgentTroll/Bukki...dyc40/commons/collect/AbstractHashStruct.java
Otherwise, please return on April first.
That is not a Map, it is a Set or an array structure ...
Your runtime to get, remove, and check keys is linear, O(n).
In a Map you get, remove, and check keys in constant time, O(1) [best and average case; the worst case is also O(n)].
Try inserting 1,000,000 items into your "QuickMap" and then retrieving them all in reverse.
Do the same with a Map, e.g. a HashMap.
If you measure the time, you will see the difference.
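A rough sketch of that measurement (the class and method names here are made up for illustration; the absolute numbers depend entirely on your machine and the JIT, so only the relative difference is meaningful):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupBench {
    // Linear scan over a list of (key, value) pairs: O(n) per lookup,
    // which is what a list-backed "map" effectively does.
    static Integer linearGet(List<int[]> pairs, int key) {
        for (int[] p : pairs) {
            if (p[0] == key) return p[1];
        }
        return null;
    }

    public static void main(String[] args) {
        final int N = 1_000_000;
        List<int[]> pairs = new ArrayList<>();
        Map<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < N; i++) {
            pairs.add(new int[] { i, i * 2 });
            map.put(i, i * 2);
        }

        // Look up keys near the end: the worst case for the linear scan,
        // but no different from any other key for the HashMap.
        long t0 = System.nanoTime();
        for (int i = N - 1; i >= N - 100; i--) linearGet(pairs, i);
        long linearTime = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = N - 1; i >= N - 100; i--) map.get(i);
        long mapTime = System.nanoTime() - t0;

        System.out.println("linear scan: " + linearTime + " ns");
        System.out.println("HashMap:     " + mapTime + " ns");
    }
}
```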
You can try Google.
I recommend this book for the full details: http://mitpress.mit.edu/books/introduction-algorithms
If you want to learn: http://en.wikipedia.org/wiki/Data_structure
For Big O notation: http://en.wikipedia.org/wiki/Big_O_notation
Just make a binary search tree :3
It can handle millions of entries and is much faster than a HashMap.
The class is called TreeMap in the Java library, and it is the same thing as a binary search tree.
marwzoor That only helps if your data has a defined natural order.
Not really; just compare the average time complexity of a binary search tree and a hash table. Basically, the average time it takes to look up or insert entries in a BST "grows like" (see the definition of Big O) the natural logarithm of the total number of entries. In contrast, adding more entries to a hash table will not slow down these operations.
TreeMap does not generally perform better than HashMap as a Map, but it allows for quick lookup of the minimum and maximum entry, or the first entry just below or above a certain key. In those specific cases, a HashMap is inefficient.
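Those ordered lookups are exactly what TreeMap's NavigableMap methods give you. A small sketch (the key/value data here is invented for illustration):

```java
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // TreeMap keeps its keys sorted (it is a red-black tree), so
        // ordered queries take O(log n). A HashMap cannot answer any
        // of these without scanning every entry.
        TreeMap<Integer, String> scores = new TreeMap<>();
        scores.put(10, "low");
        scores.put(50, "mid");
        scores.put(90, "high");

        System.out.println(scores.firstKey());     // smallest key: 10
        System.out.println(scores.lastKey());      // largest key: 90
        System.out.println(scores.floorKey(60));   // greatest key <= 60: 50
        System.out.println(scores.ceilingKey(60)); // smallest key >= 60: 90
    }
}
```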
Now the thread has become a resource thread xD
I think @Compenix and marwzoor are both right.
Generally a HashMap/hash table has an average/best-case lookup time of O(1), even with big data.
A tree has a lookup time of O(log n).
O(1) is faster than O(log n).
(I ignored the constant factors of each implementation.)
The problems are the hash function and collisions.
With big data the probability of collisions is higher, and the lookup time increases.
On the other hand, trees must be rebalanced, so you get an expensive insert ...
Hashtable is just the thread-safe (synchronized) variant of HashMap (with some other small differences, e.g. null keys are not allowed).
You can access a HashMap from different threads, which can improve performance or cause problems xD
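For what it's worth, since Java 5 the usual thread-safe choice is ConcurrentHashMap rather than Hashtable, because it locks at a finer granularity. A minimal sketch (the counter scenario is invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentDemo {
    public static void main(String[] args) throws InterruptedException {
        // A plain HashMap is not safe for concurrent writes;
        // Hashtable synchronizes every single method call.
        // ConcurrentHashMap allows safe concurrent access without
        // one global lock.
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                // merge() performs the read-modify-write atomically.
                counts.merge("hits", 1, Integer::sum);
            }
        };

        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();

        System.out.println(counts.get("hits")); // 20000: no lost updates
    }
}
```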
You can't generalize "Map is better than Tree", or the reverse; it depends on your data and your usage.
I refer back to my post about data structures.
An overview of runtimes: http://bigocheatsheet.com/