COLLECTIONS
FRAMEWORK
=========================
Limitations of arrays
1) Fixed in size
2) Homogeneous elements only
3) No underlying data structure (no readymade support like sorting, searching)
4) With respect to memory, arrays are not preferred
5) With respect to performance, arrays are recommended
6) Can hold both primitives & objects
Collections
1) Growable in size
2) Can accommodate homogeneous & heterogeneous elements
3) Standard data structures available
4) With respect to memory, collections are recommended
5) With respect to performance, collections are not recommended
6) Can hold only objects, no primitives
If we know the size in advance it is better to go
for an array, as collections degrade performance.
The collection framework holds the interfaces
& classes required for collections.
The collection equivalent in C++ is the container / STL (Standard
Template Library).
Array to list:
Arrays.asList(array)
List to array:
listObj.toArray(new String[listObj.size()])
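A small, self-contained sketch of both conversions (names and values are illustrative):

import java.util.Arrays;
import java.util.List;

public class ArrayListConversion {
    public static void main(String[] args) {
        String[] names = {"Sam", "Ram", "Tom"};

        // Array -> List (fixed-size view backed by the array)
        List<String> list = Arrays.asList(names);
        System.out.println(list);                 // [Sam, Ram, Tom]

        // List -> array
        String[] copy = list.toArray(new String[list.size()]);
        System.out.println(copy.length);          // 3
    }
}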
9 key interfaces of the collection framework:
1) Collection interface:
· Provides the most common methods like add(), remove(), contains(), isEmpty(), size() etc.
· Root interface of the collection framework.
· There is no concrete class which implements the Collection interface directly.
Collection vs Collections:
Collection                                    | Collections
Interface                                     | Class
Used for grouping objects as a single entity  | Utility class present in java.util which holds the utility methods
2) List interface:
· It is a child interface of the Collection interface.
· Used when we want to store a group of elements which allows duplicates and where insertion order is preserved.

3) Set interface:
· It is a child interface of the Collection interface.
· Represents a group of elements which does not allow duplicates & where insertion order is not preserved.

Differences between List & Set:
List                             | Set
Allows duplicates                | Does not allow duplicates
Insertion order is maintained    | No insertion order maintained
4) SortedSet interface:
· Child interface of Set.
· No duplicates, but some sorting order is followed.
5) NavigableSet interface:
· Child interface of SortedSet.
· Defines certain methods for navigation.

6) Queue interface:
· Child interface of the Collection interface.
· If we want to represent a group of individual objects prior to processing, we should go for a queue.
Ex: Mailing list
Collection(I) hierarchy (diagram not reproduced): List, Set and Queue are the child interfaces of Collection.
7) Map interface:
· It is not a child interface of the Collection interface.
· Used to store key-value pairs.
· Keys have to be unique; values can be duplicated.

Hashtable extends the Dictionary abstract class.
8) SortedMap interface:
· Child interface of Map.
· Sorting is based on the key.
9) NavigableMap interface:
· Child interface of SortedMap.
· Defines several methods for navigation; TreeMap is the implementation class which defines the sorting order.

Sorting is mainly used for Set & Map.
For implementing sorting we should use these 2 interfaces:
1) Comparable --- natural sorting order
2) Comparator --- customized sorting order
Cursors (to get collection objects one by one):
1) Enumeration
2) Iterator
3) ListIterator
Utility classes:
1) Collections
2) Arrays


Collection Interface:
There is no concrete class which implements the
Collection interface. This interface does not contain any method to get /
retrieve individual objects.
List Interface:
Ex: A library rack which can have duplicate books; books are picked based on rack no. -> book no. (index).
1) Index plays a very important role.
2) Child of the Collection interface.
3) Duplicate elements are identified based on index.
4) Insertion order is also preserved based on index.
The set(index, obj) method is used for replacing the element at an index.
***Except TreeSet & TreeMap, heterogeneous objects are allowed everywhere.
ArrayList:
1) Growable in size.
2) Heterogeneous objects allowed.
3) Null values allowed.
4) Insertion order preserved.
5) Duplicates allowed.
After adding a new item beyond the current capacity, the capacity grows to roughly 1.5x:
newCapacity = (currentCapacity * 3/2) + 1
3 ArrayList constructors are available:
ArrayList list = new ArrayList();
ArrayList list = new ArrayList(10);
ArrayList list = new ArrayList(Collection c);
ArrayList internally calls .toString(), so
printing the ArrayList using just its reference name works:
System.out.println("This is full arraylist " + bookList);
where bookList is an ArrayList object.
** All collections implement Serializable & Cloneable.
>> ArrayList implements the Serializable
& Cloneable interfaces since its contents may be transferred over a network and a
copy may need to be kept.
>> ArrayList & Vector implement the
RandomAccess interface, due to which we can access any object at the same speed.
>> RandomAccess is a marker interface - an interface which
does not have any methods.

ArrayList is:
Best choice --- if we have many retrieval calls.
Worst choice --- if we want to add / remove elements in the middle.
1) How does the size of an ArrayList increase
automatically? Could you share the code?
-> Arrays.copyOf(oldArray, newLength);
2) LinkedList is preferred when we want to do
frequent insertion/deletion -- there is no copyOf concept; you can traverse
through the list and insert at any position.
3) While passing an ArrayList to a method
or returning an ArrayList from a method, when is it considered to be a security
violation? How to fix this problem?
Always pass/return a copy, e.g. Arrays.copyOf(old, length), instead of the original reference.
4) How do you copy one ArrayList to another?
Could you share the code?
Constructor: ArrayList newList = new ArrayList(oldList);
Collections.copy(newList, oldList);
5) How do the addition and deletion of an
object at any index happen in an ArrayList? Is it expensive? Explain.
Elements are shifted internally using System.arraycopy(), which is expensive for large lists.

ArrayList is not synchronized; we can make
it synchronized by using
Collections.synchronizedList(list);
Similarly we can use synchronizedSet() / synchronizedMap() for Set & Map (see the sketch below).
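A minimal sketch of the three synchronized wrappers from java.util.Collections:

import java.util.*;

public class SynchronizedWrapperDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> set = new HashSet<>();
        Map<Integer, String> map = new HashMap<>();

        // wrap the non-synchronized collections in thread-safe views
        List<String> syncList = Collections.synchronizedList(list);
        Set<String> syncSet = Collections.synchronizedSet(set);
        Map<Integer, String> syncMap = Collections.synchronizedMap(map);

        syncList.add("A");
        syncMap.put(1, "one");
        System.out.println(syncList + " " + syncSet + " " + syncMap);
    }
}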
asList() will convert any array (e.g. an array of Strings) to a list:
String[] words = {"ace", "boom", "crew", "dog", "eon"};
// Use the Arrays utility class
List wordList = Arrays.asList(words);
LinkedList:
· Best choice: when addition or deletion at a certain place is required.
· Worst choice: retrieval.
1. Allows heterogeneous objects
2. Preserves insertion order
3. Allows null values
4. Allows duplicates
5. Underlying data structure is a doubly linked list
6. Implements Serializable, Cloneable but not the RandomAccess interface
For stacks & queues LinkedList provides (see the sketch after the constructors below):
addFirst()
addLast()
removeFirst()
getFirst() etc.
2 constructors:
LinkedList lst = new LinkedList();
LinkedList lst = new LinkedList(Collection c);
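A small sketch using the stack/queue style methods mentioned above:

import java.util.LinkedList;

public class LinkedListDemo {
    public static void main(String[] args) {
        LinkedList<String> list = new LinkedList<>();
        list.add("B");
        list.add("C");
        list.addFirst("A");                  // insert at the head
        list.addLast("D");                   // insert at the tail
        System.out.println(list);            // [A, B, C, D]
        System.out.println(list.getFirst()); // A
        list.removeFirst();                  // behaves like removing from the front of a queue
        System.out.println(list);            // [B, C, D]
    }
}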

Vector:
1) Allows heterogeneous objects
2) Allows duplicates
3) Allows null
4) Implements Serializable & Cloneable
5) Insertion order preserved
6) Implements RandomAccess
7) It is synchronized (thread safe) - only one thread can access it at a time,
hence performance degrades
8) It's a legacy class
9) Same as ArrayList, its best operation is retrieval
Since Vector is a legacy class, it has got
very lengthy method names:
vec.addElement(obj)
vec.removeElementAt(index) / removeElement(obj)
Object firstElement()
Object lastElement()
capacity() - how many objects it can accommodate
size() - number of elements in the vector
Once the current capacity is exhausted, the capacity doubles (2 * current capacity);
we can control this during vector initialization.
1) Vector v = new Vector();
2) Vector v = new Vector(10);
3) Vector v = new Vector(Collection c);
4) Vector v = new Vector(int initialCapacity, int incrementalCapacity);
e.g. new Vector(1000, 5);
Stack
1. Child of Vector
2. LIFO order
Constructor:
Stack s = new Stack();
s.peek()      // returns the top of the stack without removing it
s.pop()       // removes and returns the top of the stack
s.push(obj)   // inserts into the stack
s.search(obj) // returns the 1-based offset from the top of the stack (-1 if not found)
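A quick sketch of the Stack methods:

import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<String> s = new Stack<>();
        s.push("A");
        s.push("B");
        s.push("C");
        System.out.println(s.peek());      // C (top, not removed)
        System.out.println(s.pop());       // C (top, removed)
        System.out.println(s.search("A")); // 2 (offset from the top)
        System.out.println(s.search("Z")); // -1 (not present)
    }
}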
Cursors in Java:
· Enumeration
· Iterator
· ListIterator
Enumeration:
Vector v = new Vector();
Enumeration en = v.elements();
1. This is applicable only for legacy classes (Vector, Hashtable etc.).
2. We get only read access & cannot perform remove operations.
It has got 2 methods:
Ø en.hasMoreElements();
Ø en.nextElement();
Limitations
Ø No remove
Ø Only forward direction cursor
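A minimal sketch of reading a Vector through an Enumeration:

import java.util.Enumeration;
import java.util.Vector;

public class EnumerationDemo {
    public static void main(String[] args) {
        Vector<Integer> v = new Vector<>();
        for (int i = 0; i <= 10; i++) {
            v.addElement(i);
        }
        Enumeration<Integer> en = v.elements();
        while (en.hasMoreElements()) {
            int value = en.nextElement();
            if (value % 2 == 0) {
                System.out.print(value + " "); // 0 2 4 6 8 10
            }
        }
    }
}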
Iterator:
1) Universal cursor (works on any Collection).
2) Both read & remove operations are possible.
Methods:
Ø itr.hasNext();
Ø itr.next();
Ø itr.remove();
Limitations:
Ø Only forward direction cursor
Ø Read & remove allowed, not add or replace.
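A small sketch that removes the odd numbers while iterating:

import java.util.ArrayList;
import java.util.Iterator;

public class IteratorDemo {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<>();
        for (int i = 0; i <= 10; i++) {
            list.add(i);
        }
        Iterator<Integer> itr = list.iterator();
        while (itr.hasNext()) {
            if (itr.next() % 2 != 0) {
                itr.remove();   // safe removal while iterating
            }
        }
        System.out.println(list); // [0, 2, 4, 6, 8, 10]
    }
}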
ListIterator: the most powerful cursor
1) Bidirectional cursor.
2) Only for List classes.
3) Read, remove, add, replace are all possible.
Methods:
hasNext()
next()
nextIndex()
hasPrevious()
previous()
previousIndex()
remove()
set(newObj)
add(newObj)
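A sketch showing replace and backward traversal:

import java.util.ArrayList;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>();
        list.add("A");
        list.add("B");
        list.add("C");
        ListIterator<String> litr = list.listIterator();
        while (litr.hasNext()) {
            String s = litr.next();
            if (s.equals("B")) {
                litr.set("B2");   // replace the current element
            }
        }
        // walk backwards as well, since ListIterator is bidirectional
        while (litr.hasPrevious()) {
            System.out.print(litr.previous() + " "); // C B2 A
        }
    }
}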

Set:
Ø No duplicates
Ø No new methods; uses the Collection interface methods
Ø If we try to add a duplicate there is no compile-time or run-time error; the add() method simply returns false.
Ø The underlying data structure is a hashtable
Ø Insertion order is not preserved, as objects are inserted based on hashcode.
Ø Heterogeneous objects allowed
Ø Null allowed
Ø Implements the Serializable & Cloneable interfaces
Ø Search operation is fast, based on hashcode
HashSet:
Constructors:
Ø HashSet h = new HashSet();
Creates a HashSet with initial capacity 16 & fill ratio 0.75 - the fill ratio (load factor) is
the point at which the hashset is grown
Ø HashSet h = new HashSet(15);
Ø HashSet h = new HashSet(Collection c);
Ø HashSet h = new HashSet(initialCapacity, fillRatio);
After reaching the fill ratio / load factor a
new, larger HashSet object is created.
Q2. What copy technique is internally used by the HashSet clone() method?
There are two copy techniques in every object-oriented programming language: deep copy and shallow copy.
To create a clone or copy of the Set object, HashSet internally uses a shallow copy in its clone() method; the elements themselves are not cloned. In other words, a shallow copy is made by copying the references of the objects.
LinkedHashSet:
Ø Subclass of HashSet
Ø Insertion order preserved
LinkedHashSet is the best choice for cache-based applications where no
duplicates are allowed and insertion order must be preserved.
SortedSet:
This is an interface!
Ø Child interface of the Set interface.
Ø No duplicates
Ø Sorting order maintained.
A normal set does not contain any new methods, whereas SortedSet defines 6 new
methods: first(), last(), headSet(e), tailSet(e), subSet(e1, e2) and comparator().

TREE SET

TreeSet constructors:
1) TreeSet t = new TreeSet() - elements are inserted according to default natural sorting order
2) TreeSet t = new TreeSet(Comparator c) - elements are inserted according to customized sorting order
3) TreeSet t = new TreeSet(Collection c)
4) TreeSet t = new TreeSet(SortedSet s)
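A short sketch of both sorting modes:

import java.util.Collections;
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        TreeSet<String> t = new TreeSet<>();   // default natural sorting order
        t.add("banana");
        t.add("apple");
        t.add("cherry");
        System.out.println(t.add("apple"));    // false - duplicates are simply rejected
        System.out.println(t);                 // [apple, banana, cherry]

        // customized (descending) order via a Comparator
        TreeSet<String> desc = new TreeSet<>(Collections.reverseOrder());
        desc.addAll(t);
        System.out.println(desc);              // [cherry, banana, apple]
    }
}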
TreeSet alone cannot accept null values & heterogeneous data types.
TreeSet & TreeMap alone cannot accept heterogeneous data types.
Null acceptance:
Since TreeSet always performs a comparison before inserting, null is
never accepted into a set which already has elements.
If null is the very first element to be inserted, it will be accepted.
Important note: TreeSet doesn't use the hashCode() and equals()
methods to compare its elements. It uses the compare() (or compareTo()) method to
determine the equality of two elements.
In TreeSet:
The elements to be inserted into a TreeSet should always be comparable.

obj2.compareTo(obj1)
if (obj2 < obj1)  - returns -ve
if (obj2 > obj1)  - returns +ve
if (obj2 == obj1) - returns 0


Comparable - natural sorting order
Comparator - customized sorting order

Comparable --- obj1.compareTo(obj2) = default sorting
Comparator --- compare(obj1, obj2), equals() === customized sorting
How do you sort objects using some field?
1. Create a Student class with the required fields - rollNo, name, collegeName.
2. Add another class which implements the Comparator interface and overrides the compare()
method declared in the Comparator interface:
class SortByRollNo implements Comparator<Student> {
    public int compare(Student o1, Student o2) {
        return o1.rollNo - o2.rollNo;
    }
}
3. Call Collections.sort(list, new SortByRollNo());
If a class can implement Comparable, then use the compareTo() method in the same class.
If we cannot modify the class, then write a different class which implements Comparator
and has a compare() method, which in turn can return o1.compareTo(o2).



Strings: first sort by length, then sort alphabetically (see the sketch below).
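One possible sketch of that custom Comparator (words chosen arbitrarily):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class StringLengthSort {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(
                Arrays.asList("pear", "fig", "apple", "kiwi", "plum"));

        Collections.sort(words, new Comparator<String>() {
            public int compare(String a, String b) {
                // first by length, then alphabetically for equal lengths
                if (a.length() != b.length()) {
                    return a.length() - b.length();
                }
                return a.compareTo(b);
            }
        });
        System.out.println(words); // [fig, kiwi, pear, plum, apple]
    }
}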





Comparable interface                        | Comparator interface
compareTo(Object o1)                        | compare(Object o1, Object o2)
  if (this.eid < o1.eid) { return -1; }     |   return o1.eid.compareTo(o2.eid);

For ascending order:  return obj1 - obj2 (obj1 < obj2 gives -ve, obj1 > obj2 gives +ve)
For descending order: return obj2 - obj1 (obj1 < obj2 gives +ve, obj1 > obj2 gives -ve)



HashMap*************
1) Map is not a child interface of Collection.
2) If we want to represent a group
of objects as key-value pairs then we should go for Map.
Key  | Value
101  | Sam
102  | Ram
103  | Sham
104  | Tom
Each key-value row is one Entry.
3) Duplicate keys are not allowed
but values can be duplicated.
4) Each key-value pair is called an
entry, hence a map is considered a collection of entry objects.
Map interface methods:
1) Object put(Object key, Object value)
- Adds one key-value pair to the map.
- If the key is already present then the old value will be replaced with the new
value, and the old value is returned.
Ex: m.put(101, "Durga") - returns null
    m.put(102, "Ravi")  - returns null
    m.put(101, "Shiva") - returns the old value, "Durga"
2) void putAll(Map m);
3) Object get(Object key);
4) boolean containsKey(Object key);
5) boolean containsValue(Object value);
6) Object remove(Object key);
7) boolean isEmpty();
8) void clear();
9) int size();
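The put() behaviour described above, as a runnable sketch:

import java.util.HashMap;
import java.util.Map;

public class MapMethodsDemo {
    public static void main(String[] args) {
        Map<Integer, String> m = new HashMap<>();
        System.out.println(m.put(101, "Durga")); // null (new key)
        System.out.println(m.put(102, "Ravi"));  // null (new key)
        System.out.println(m.put(101, "Shiva")); // Durga (old value replaced)
        System.out.println(m.containsKey(102));  // true
        System.out.println(m.get(101));          // Shiva
        System.out.println(m.remove(102));       // Ravi
        System.out.println(m.size());            // 1
    }
}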

Collection views of a map:
1) Set keySet() - gets the keys
2) Collection values() - gets the values
3) Set entrySet() - gets the entries (key-value rows)
Entry interface:
The Entry interface is defined inside the Map interface:
interface Map {
    interface Entry {
        Object getKey();
        Object getValue();
        Object setValue(Object newValue);
    }
}
A map is a group of key-value pairs,
& each key-value pair is called an entry.
A map is considered a collection of entry objects. Without an existing map object
there is no chance of an entry object existing, hence the Entry interface is defined
inside the Map interface.
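A sketch of the three collection views, iterating the entry objects:

import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<Integer, String> m = new HashMap<>();
        m.put(101, "Sam");
        m.put(102, "Ram");
        m.put(103, "Tom");

        System.out.println(m.keySet());   // e.g. [101, 102, 103]
        System.out.println(m.values());   // e.g. [Sam, Ram, Tom]

        for (Map.Entry<Integer, String> entry : m.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}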
HashMap
1) The underlying data structure is a hashtable.
2) Insertion order is not preserved & it is based on the hashcode of the keys.
3) Duplicate keys are not allowed but values can be duplicated.
4) Heterogeneous objects are allowed for both keys & values.
5) Null is allowed only once for a key & multiple times for values.
6) HashMap implements the Serializable & Cloneable interfaces but not the RandomAccess interface.
7) HashMap is the best choice if our frequent operation is the search operation.
> HashMap m = new HashMap(); -
creates an empty HashMap object with default initial capacity 16 & the
default fill ratio 0.75.
> HashMap m = new HashMap(int initialCapacity) - creates an empty HashMap object with the specified
initial capacity & default fill ratio 0.75.
> HashMap m = new HashMap(int initialCapacity, float fillRatio);
> HashMap m = new HashMap(Map m);


HashMap                              | Hashtable
Non-synchronized                     | Synchronized
Not thread safe                      | Thread safe; only one thread can access it at a time
Performance is fast                  | Performance is degraded
Non-legacy                           | Legacy
Allows null key & null values        | Does not allow null key/value - NPE is thrown
How to get a synchronized version of a HashMap object: by default HashMap is non-synchronized, but we can
get a synchronized version of a HashMap by
using the synchronizedMap() method of the Collections class:
HashMap m = new HashMap();
// to synchronize
Map m1 = Collections.synchronizedMap(m);
LinkedHashMap
It is a child class of HashMap. It is exactly the same as HashMap, including methods
& constructors, except for the following differences:
HashMap                                                         | LinkedHashMap
Data structure = hashtable                                      | Data structure = hashtable + linked list (hybrid)
Insertion order not preserved; based on hashcode of keys        | Insertion order preserved
Introduced in version 1.2                                       | Introduced in version 1.4
If we replace HashMap with
LinkedHashMap, insertion order is preserved. LinkedHashSet & LinkedHashMap
are used for cache-based applications.
Difference between == & .equals()
In general the == operator is meant for reference comparison (address comparison), whereas
.equals() is meant for content comparison.
Integer i1 = new Integer(10);
Integer i2 = new Integer(10);
i1 == i2       // false
i1.equals(i2)  // true
IdentityHashMap
It is exactly the same as HashMap (including methods & constructors) except for the
following difference.
In the case of a normal HashMap, the JVM
uses .equals() to identify duplicate keys, which is meant for content comparison.
But in the case of IdentityHashMap, the JVM uses the == operator to identify duplicate
keys, which is meant for reference comparison (address comparison).
HashMap m = new HashMap();
Integer i1 = new Integer(10);
Integer i2 = new Integer(10);
m.put(i1, "Pavan");
m.put(i2, "Kalyan");
m = {10=Kalyan}
i1 & i2 are duplicate keys because i1.equals(i2) returns true. If we replace
HashMap with IdentityHashMap then i1 & i2 are not duplicate keys, because
i1 == i2 returns false. In this case the output is
m = {10=Pavan, 10=Kalyan}
WeakHashMap
It is exactly the same as HashMap except for the following difference.
In the case of HashMap, even though an
object doesn't have any external reference it is not eligible for GC if it is associated with the
HashMap, i.e. HashMap dominates the garbage collector.
In the case of WeakHashMap, if an object doesn't
have any external references it is eligible for GC even though the object is associated with the
WeakHashMap, i.e. the garbage collector dominates WeakHashMap.
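The example the next paragraph refers to isn't reproduced in these notes; a minimal sketch along the same lines (the Temp class and the printed values are reconstructions based on the description):

import java.util.WeakHashMap;

class Temp {
    public String toString() { return "temp"; }
    public void finalize() { System.out.println("Finalize method called"); }
}

public class WeakHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        WeakHashMap<Temp, String> m = new WeakHashMap<>();
        Temp t = new Temp();
        m.put(t, "durga");
        System.out.println(m);   // {temp=durga}
        t = null;                // no more strong references to the key
        System.gc();
        Thread.sleep(5000);
        System.out.println(m);   // {} - entry removed (with HashMap it would still print {temp=durga})
    }
}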


In the above example the Temp object is not eligible for GC because it is associated with the HashMap.
In that case the output is
{temp=durga}
In the above program, if we replace HashMap with WeakHashMap then the Temp object is
eligible for GC. In this case the output is
{temp=durga}
Finalize method called
{}
SortedMap (I)
It is the child interface of Map. If we want to represent a group of key-value pairs
according to some sorting order of the keys then we should go for SortedMap. Sorting is based on the key,
not on the value.
SortedMap defines the following specific methods: firstKey(), lastKey(), headMap(key), tailMap(key),
subMap(key1, key2) and comparator().

TreeMap
1) The underlying data structure is a Red-Black tree.
2) Insertion order is not preserved & it is based on some sorting order of the keys.
3) Duplicate keys are not allowed; duplicate values are allowed.
4) If we are depending on the default natural sorting order then the keys should be
homogeneous & comparable, otherwise we will get a runtime exception:
ClassCastException.
If we are defining our own sorting by a Comparator then the keys need not be homogeneous
& comparable. We can take heterogeneous non-comparable objects also.
5) Whether we are depending on the default natural sorting order or a customized sorting order,
there are no restrictions on values. We can take heterogeneous non-comparable
objects also.
Null acceptance
1) For a non-empty TreeMap, if we try to
insert an entry with a null key then we will get a runtime exception:
NullPointerException.
2) For an empty TreeMap, the 1st entry
with a null key is allowed. But after inserting that entry, if we try to
insert any other entry we will get a NullPointerException.
NOTE: The above null acceptance rule is applicable until version 1.6
only. From version 1.7 onwards null is not allowed for keys in TreeMap. But for
values we can use null any number of times; there is no version restriction.
Constructors:
TreeMap t = new TreeMap();             // default natural sorting order
TreeMap t = new TreeMap(Comparator c); // customized sorting order
TreeMap t = new TreeMap(Map m);        // for a given map, creates an equivalent TreeMap
TreeMap t = new TreeMap(SortedMap m);  // for a given sorted map, creates an equivalent TreeMap

Using Comparator
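A possible sketch of a TreeMap with a customized (descending) key order (values chosen arbitrarily):

import java.util.Collections;
import java.util.TreeMap;

public class TreeMapComparatorDemo {
    public static void main(String[] args) {
        // keys sorted in descending order instead of the natural ascending order
        TreeMap<Integer, String> t = new TreeMap<>(Collections.reverseOrder());
        t.put(100, "ZZZ");
        t.put(103, "yyy");
        t.put(101, "xxx");
        System.out.println(t); // {103=yyy, 101=xxx, 100=ZZZ}
    }
}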

Hashtable
1) The underlying data structure for Hashtable is a hashtable.
2) Insertion order is not preserved & it is based on the hashcode of the keys.
3) Duplicate keys are not allowed & values can be duplicated.
4) Heterogeneous objects are allowed for both keys & values.
5) Null is not allowed for either key or value, otherwise we will get a runtime NullPointerException.
6) It implements Serializable, Cloneable but not RandomAccess.
7) Every method present in Hashtable is synchronized & hence a Hashtable object is thread safe.
8) Hashtable is the best choice if our frequent operation is the search operation.
The default initial capacity of a Hashtable is 11.
Constructors
Hashtable ht = new Hashtable();           // default capacity 11, fill ratio 0.75
Hashtable ht = new Hashtable(int initialCapacity);
Hashtable ht = new Hashtable(int initialCapacity, float fillRatio);
Hashtable ht = new Hashtable(Map m);
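The put() calls below use a Temp class that is not shown in these notes; a hedged reconstruction, assuming hashCode() simply returns the wrapped int (so the bucket index is hashCode % capacity):

class Temp {
    int i;
    Temp(int i) { this.i = i; }
    // assumed: returning i directly makes the bucket index i % 11 for the default capacity
    public int hashCode() { return i; }
    public String toString() { return i + ""; }
}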
h.put(new Temp(5), "A");
h.put(new Temp(2), "B");
h.put(new Temp(6), "C");
h.put(new Temp(15), "D");
h.put(new Temp(23), "E");
h.put(new Temp(16), "F");
15%11 = 4
23%11 = 1
16%11 = 5
Bucket | Entries
10     |
9      |
8      |
7      |
6      | 6=C
5      | 5=A, 16=F
4      | 15=D
3      |
2      | 2=B
1      | 23=E
0      |
Reading is always top -> down and, within a bucket, right -> left:
6=C, 16=F, 5=A, 15=D, 2=B, 23=E
When a collision occurs we can solve it by either chaining or open addressing.
Chaining --- if the mod returns the same
result for more than one element, the elements are added in linked-list form,
one beside another.
Open addressing --- if the block is occupied, the
element is placed in the next free block available.
Even though HashMap uses a hashtable data structure,
internally it uses an array for
representing the buckets, and for adding elements in case of collision it uses a linked list.

If we change the hashCode() of the
Temp class to
public int hashCode() {
    return i % 9;
}
then, since the bucket positions depend on the hashCode formula, there is
automatically a change in our output.
If we configure the initial capacity as 25, i.e.
Hashtable h = new Hashtable(25);
the bucket index becomes hashCode % 25, so the output changes again.

Properties
In our program, anything which
changes frequently, like username, password, mobile number etc., is not recommended to be
hardcoded in Java files. We can overcome this problem by using a properties file. In a normal map like HashMap, Hashtable or TreeMap the
key & value can be of any type. In the case of Properties the key & value should
be of String type.
Constructor:
Properties p = new Properties();
Methods:
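The commonly used methods are getProperty(key), setProperty(key, value), load(...) and store(...). A small sketch, assuming a file named db.properties exists in the working directory:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class PropertiesDemo {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        // load existing key=value pairs from the file
        p.load(new FileInputStream("db.properties"));

        System.out.println(p.getProperty("user")); // value from the file, or null if absent

        p.setProperty("timeout", "30");
        // write the (possibly updated) properties back to disk
        p.store(new FileOutputStream("db.properties"), "updated by PropertiesDemo");
    }
}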



QUEUE
1.5 version enhancements (Queue interface)
It is the child interface of Collection.
If we want to represent a
group of individual objects prior to processing then we should go for a queue.
Ex: Before sending an SMS
message, all the mobile numbers have to be stored in some data structure. In whichever order
we added the mobile numbers, in the same order the messages should be sent. For this first
in, first out requirement a queue is the best choice. Usually queues follow FIFO order, but based on
our requirement we can implement our own priority order also (PriorityQueue).
From version 1.5
onwards the LinkedList class also implements the Queue interface.
The LinkedList based
implementation of a queue always follows FIFO.
Queue interface specific methods:
1) offer(Object o) - adds an object to the queue.
2) poll() - removes and returns the head element of the queue; returns null if the queue is empty.
3) remove() - removes and returns the head element; throws a runtime NoSuchElementException if the queue is empty.
4) peek() - returns the head element without removing it; returns null if the queue is empty.
5) element() - returns the head element without removing it; throws NoSuchElementException if the queue is empty.
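A FIFO sketch for the SMS example, using LinkedList as the Queue implementation:

import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> numbers = new LinkedList<>();
        numbers.offer("9999900000");
        numbers.offer("8888800000");
        numbers.offer("7777700000");

        System.out.println(numbers.peek());     // 9999900000 (head, not removed)
        while (!numbers.isEmpty()) {
            System.out.println(numbers.poll()); // FIFO: processed in insertion order
        }
        System.out.println(numbers.poll());     // null (queue is now empty)
    }
}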
PriorityQueue
1) If we want to represent a group of individual objects prior to processing according to some priority then we
should go for PriorityQueue.
2) The priority can be either the default natural sorting order or a customized sorting order defined by a Comparator.
3) Insertion order is not preserved & it is based on some priority.
4) Duplicate objects are allowed.
5) If we are depending on the default natural sorting order, the objects should compulsorily be homogeneous &
comparable, otherwise we will get a runtime exception: ClassCastException.
6) If we are defining our own sorting by a Comparator then the objects need not be homogeneous & comparable.
7) Null is not allowed, even as the first element.

PriorityQueue q = new PriorityQueue(); // creates an empty priority queue with default initial capacity 11; all objects will be
inserted according to the default natural sorting order
PriorityQueue q = new PriorityQueue(int initialCapacity);
PriorityQueue q = new PriorityQueue(int initialCapacity, Comparator c); // a Comparator never comes alone as a parameter in this queue
PriorityQueue q = new PriorityQueue(SortedSet s);
PriorityQueue q = new PriorityQueue(Collection c);
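A sketch using the default natural ordering:

import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> q = new PriorityQueue<>(); // natural order: smallest has highest priority
        q.offer(15);
        q.offer(5);
        q.offer(25);
        q.offer(10);
        while (!q.isEmpty()) {
            System.out.print(q.poll() + " "); // 5 10 15 25
        }
    }
}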

Note: Some platforms won't provide proper
support for thread priorities & priority queues.
Custom priority:

1.6 version enhancements in the collection framework
As part of version 1.6 the following concepts were introduced:
· NavigableSet
· NavigableMap
NavigableSet
It is the child interface of SortedSet, introduced in version 1.6, which defines several
methods for navigation purposes.

It defines the following methods:
Within a range:
floor(e)   // returns the largest element <= e
lower(e)   // returns the largest element < e
ceiling(e) // returns the smallest element >= e
higher(e)  // returns the smallest element > e
pollFirst()     // removes & returns the first element
pollLast()      // removes & returns the last element
descendingSet() // returns the navigable set in descending order
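A sketch over TreeSet, which implements NavigableSet:

import java.util.TreeSet;

public class NavigableSetDemo {
    public static void main(String[] args) {
        TreeSet<Integer> t = new TreeSet<>();
        for (int i = 1000; i <= 1005; i++) {
            t.add(i);
        }
        System.out.println(t.floor(1002));     // 1002 (<=)
        System.out.println(t.lower(1002));     // 1001 (<)
        System.out.println(t.ceiling(1003));   // 1003 (>=)
        System.out.println(t.higher(1003));    // 1004 (>)
        System.out.println(t.pollFirst());     // 1000 (removed)
        System.out.println(t.pollLast());      // 1005 (removed)
        System.out.println(t.descendingSet()); // [1004, 1003, 1002, 1001]
    }
}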

NavigableMap
NavigableMap is the child interface of SortedMap. It defines several methods for navigation
purposes.
NavigableMap defines the following methods: floorKey(k), lowerKey(k), ceilingKey(k), higherKey(k),
pollFirstEntry(), pollLastEntry(), descendingMap() etc. (TreeMap is the implementation class.)

BlockingQueue & LinkedBlockingQueue still need to be added to these notes.
Collections:
The Collections class defines several utility methods for collection objects, like
sorting, searching, reversing etc.
The Collections class defines the following 2 sort methods:
1) public static void sort(List l)  // to sort based on the default natural sorting order
Ø The list should compulsorily
contain homogeneous & comparable objects, otherwise we will get a runtime
ClassCastException.
Ø The list should not contain
null, or else we will get an NPE.
2) public static void sort(List l, Comparator c)  // customized sorting
Default sorting:

Custom sorting:


Searching elements of a List:
Internally uses binary search.
The Collections class defines the following binarySearch methods:
1) public static int binarySearch(List l, Object target)
Ø If the list is sorted
according to the default natural sorting order then we have to use this method.
2) public static int binarySearch(List l, Object target, Comparator c)
Ø We have to use this method
if the list is sorted according to a customized sorting order.
Conclusions:
Ø The above search methods internally use the binary search algorithm.
Ø A successful search returns the index.
Ø An unsuccessful search returns the insertion point (as -(insertionPoint) - 1).
Ø The insertion point is the location where we could place the target element in the sorted list.
Ø Before calling a binarySearch method the list should always be sorted, or else we will get unpredictable results.
Ø If the list is sorted according to a Comparator, then at the time of the search operation also we have to pass the
same Comparator object, otherwise we will get unpredictable results.
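A quick sketch of both outcomes:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BinarySearchDemo {
    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(40, 10, 30, 20));
        Collections.sort(l);                                  // [10, 20, 30, 40] - must be sorted first
        System.out.println(Collections.binarySearch(l, 30));  // 2  (successful: index)
        System.out.println(Collections.binarySearch(l, 25));  // -3 (unsuccessful: -(insertionPoint) - 1)
    }
}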


Collections.unmodifiableCollection(l);
// returns a view of the collection that cannot be modified


Reversing elements of a List:
public static void reverse(List l)
reverse() --- reverses the order of the elements of the list.
reverseOrder() --- returns the reverse of a Comparator object:
Comparator c1 = Collections.reverseOrder(Comparator c);

::::::Arrays::::::
Arrays is a utility class which defines several utility methods for arrays / array objects.
1) Sorting elements of arrays:
The Arrays class defines the following sort
methods to sort primitive & object type arrays:
public static void sort(primitiveArray p)          // default natural sorting order
public static void sort(Object[] o)                // sort objects in the default natural sorting order
public static void sort(Object[] o, Comparator c)  // object array with customized sorting
We can sort primitive arrays only based on
the default natural sorting order, whereas we can sort object arrays either based on
the default natural sorting order or a customized sorting order.
Searching elements of an array:
The Arrays class defines the corresponding binarySearch methods.
All rules of the Arrays class binarySearch
methods are exactly the same as for the Collections.binarySearch methods.


public static List asList(Object[] a)
Strictly speaking this method won't create an independent list object; for the existing
array we are getting a list view.
1) No separate object is created!
2) If we perform any change via the array reference, that change will be reflected in
the list. Similarly, if we perform any change via the List reference, that
change will be reflected automatically in the array.
::::::****Concurrent Collections****::::::
Need for concurrent collections:
1) The collection objects we have are not thread safe or synchronized, except Hashtable & Vector.
2) Even if we use Collections.synchronizedList(l) - only 1 thread can access the object at a time,
which impacts performance.
3) If a thread is accessing the object and
simultaneously another thread is modifying the same object, we get a ConcurrentModificationException.
Fail-safe iterators: do not throw any
exception if a concurrent modification happens while iterating over
the collection.
Fail-fast iterators: these throw ConcurrentModificationException if there is any
concurrent modification while iterating the collection.
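A sketch contrasting the two behaviours (ConcurrentHashMap's iterator is fail-safe; ArrayList's is fail-fast):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class FailFastVsFailSafe {
    public static void main(String[] args) {
        // fail-safe: ConcurrentHashMap tolerates modification during iteration
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        map.put(1, "A");
        map.put(2, "B");
        for (Integer key : map.keySet()) {
            map.put(3, "C");            // no ConcurrentModificationException
        }
        System.out.println(map);

        // fail-fast: ArrayList's iterator throws on structural modification
        List<String> list = new ArrayList<>(map.values());
        try {
            for (String s : list) {
                list.add("D");          // modifying while iterating
            }
        } catch (java.util.ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException from the fail-fast iterator");
        }
    }
}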

instanceof - compares super & subclass (is-a check)
getClass() - compares exact classes
How do you avoid NullPointerException while comparing two Strings in Java?
Since equals() returns false when compared to null and doesn't
throw NullPointerException, you can
use this property to avoid an NPE while comparing Strings. Suppose you have a
known String "abc" and you are comparing it with an unknown String variable str;
then you should call equals as "abc".equals(str). This will not throw
java.lang.NullPointerException even if str is
null. On the other hand, if you call str.equals("abc"), it will
throw an NPE. So be careful with this. By the way, this is one of the Java coding
best practices which Java developers should follow while using the equals() method.
Read more: https://javarevisited.blogspot.com/2013/08/10-equals-and-hashcode-interview.html#ixzz5vb4CPkAQ
When overriding the equals() method we should consider:
Ø Use the this comparison first --- if obj == this, it is the fastest comparison we have:
if (obj == this) return true;
Ø Check that the object is not null and is of the same type:
if (obj == null || this.getClass() != obj.getClass()) return false;
If the objects are of the same type, do the typecasting.
Ø Compare the attributes - check numeric attributes first & do null checks to avoid NPE:
Person p = (Person) obj;
return (p.id == this.id)
    && (this.firstName != null && this.firstName.equals(p.firstName))
    && (this.lastName != null && this.lastName.equals(p.lastName));
When overriding hashCode() consider:
Ø If two objects are equal according to the equals() method,
they should return the same hashcode.
Ø Whenever we invoke the hashCode() method on an
object multiple times, it should return the same value each time.
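Putting the above rules together, a sketch of a Person class with both methods overridden (the field names follow the snippet above):

import java.util.Objects;

public class Person {
    private final int id;
    private final String firstName;
    private final String lastName;

    public Person(int id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) return true;                              // fastest check
        if (obj == null || this.getClass() != obj.getClass()) return false;
        Person p = (Person) obj;
        return this.id == p.id
                && Objects.equals(this.firstName, p.firstName)     // null-safe content comparison
                && Objects.equals(this.lastName, p.lastName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, firstName, lastName);              // equal objects -> equal hashcodes
    }
}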