Ques) Difference between AnnotationSessionFactoryBean and LocalSessionFactoryBean?
Ans:- AnnotationSessionFactoryBean is not available in Hibernate 4. As part of migrating a Hibernate 3 application to Hibernate 4, you have to make the necessary changes in your configuration files.
If you use the following code in a Hibernate 4 application, it will give you an error. Code used in Spring 3:
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"/>
The application may give the following error: nested exception is java.lang.NoClassDefFoundError: Lorg/hibernate/cache/CacheProvider; So you have to use org.springframework.orm.hibernate4.LocalSessionFactoryBean instead of AnnotationSessionFactoryBean. This change is required as part of the migration to Hibernate 4.
In Hibernate 4 the CacheProvider-related interfaces and classes have been removed. The RegionFactory-related cache interfaces are now used for second-level caching.
Here is the complete configuration using LocalSessionFactoryBean instead of AnnotationSessionFactoryBean in a Hibernate 4 application:
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
</bean>
<!-- Enables annotation based transactions -->
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
</bean>
<!-- Enables annotation based transactions -->
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<!--
Initializes hibernate session factory -->
<bean
id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
</bean>
In the Spring config file, you can split the configuration into multiple XML files and import each one as follows:
<import resource="marketSegmentGroup/marketSegmentGroup-context.xml" />
org.hibernate.Transaction.commit() and org.hibernate.Session.flush()
Is it good practice to call org.hibernate.Session.flush() by hand? As stated in the org.hibernate.Session docs, it "must be called at the end of a unit of work, before committing the transaction and closing the session (depending on flush-mode, Transaction.commit() calls this method)".
What is the purpose of calling org.hibernate.Session.flush() by hand if org.hibernate.Transaction.commit() will call it automatically?
In the Hibernate Manual you can see this example:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.save(customer);
    if (i % 20 == 0) { // 20, same as the JDBC batch size
        // flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
Without the call to the flush method, your first-level cache would eventually trigger an OutOfMemoryError.
One common case for explicitly flushing is when you create
a new persistent entity and you want it to have an artificial primary key
generated and assigned to it, so that you can use it later on in the same
transaction. In that case calling flush would result in your entity being given
an id.
Another case is if there are a lot of things in the first-level cache and you'd like to clear it out periodically (in order to reduce the amount of memory used by the cache) but you still want to commit the whole thing together.
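A minimal sketch of the first case, assuming Customer uses a database-generated identifier (the getId() accessor is illustrative):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Customer customer = new Customer();
session.save(customer);
session.flush(); // forces the INSERT now, so the generated id is assigned

Long id = customer.getId(); // usable later in the same transaction,
                            // e.g. as a foreign key on another entity

tx.commit();
session.close();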
ArithmeticException:
“Non-terminating decimal expansion; no exact representable decimal result”
Why does the following code raise the exception shown
below?
BigDecimal a = new BigDecimal("1.6");
BigDecimal b = new BigDecimal("9.2");
a.divide(b) //java.lang.ArithmeticException:
Non-terminating decimal expansion
No exact representation decimal result.
When a MathContext object is supplied with a precision
setting of 0 (for example, MathContext.UNLIMITED), arithmetic operations are
exact, as are the arithmetic methods which take no MathContext object. (This is
the only behavior that was supported in releases prior to 5.)
As a corollary of computing the exact result, the rounding
mode setting of a MathContext object with a precision setting of 0 is not used
and thus irrelevant. In the case of divide, the exact quotient could have an
infinitely long decimal expansion; for example, 1 divided by 3.
If the quotient has a nonterminating decimal expansion and
the operation is specified to return an exact result, an ArithmeticException is
thrown. Otherwise, the exact result of the division is returned, as done for
other operations.
a.divide(b, 2, RoundingMode.HALF_UP)
where 2 is the scale (the number of digits after the decimal point) and RoundingMode.HALF_UP is the rounding mode.
Because you're not specifying a precision and a
rounding-mode. BigDecimal is complaining that it could use 10, 20, 5000, or
infinity decimal places, and it still wouldn't be able to give you an exact
representation of the number. So instead of giving you an incorrect BigDecimal,
it just whinges at you.
However, if you supply a RoundingMode and a precision, then it will be able to convert (e.g. 1.333333333-to-infinity to something like 1.3333), but you as the programmer need to tell it what precision you're 'happy with'.
a.divide(b, MathContext.DECIMAL128)
MathContext.DECIMAL32, DECIMAL64 and DECIMAL128 match the IEEE 754R decimal formats of those bit widths, giving 7, 16 and 34 significant digits respectively.
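Putting both fixes together, a minimal self-contained sketch:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class BigDecimalDivideDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.6");
        BigDecimal b = new BigDecimal("9.2");

        // a.divide(b); // would throw ArithmeticException: non-terminating expansion

        // Scale of 2 with HALF_UP rounding:
        System.out.println(a.divide(b, 2, RoundingMode.HALF_UP)); // 0.17

        // 34 significant digits (IEEE 754R Decimal128):
        System.out.println(a.divide(b, MathContext.DECIMAL128));
    }
}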
Many of our applications use jars that are not available from the Maven repository. In order to make such jars available through Maven, we can add them to the local repository using the following Maven command:
mvn install:install-file -DgroupId=groupId -DartifactId=artifactId -Dversion=version -Dpackaging=jar -Dfile=/path/to/file.jar
NOTE:
If you get the error
“Error creating shaded jar: Invalid signature file digest for Manifest main
attributes” when compiling a shaded jar using maven, add the following to that
project’s pom.xml inside the configuration section of the shade plugin:
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
If this error persists,
also ensure that any in-house dependencies that use the shade plugin also
contain the above configuration.
Caused by: org.springframework.context.annotation.ConflictingBeanDefinitionException:
Annotation-specified bean name 'taskTypeTranslationDao' for bean class
[com.tss.in.web.resource.monitor.test.dao.impl.TaskTypeTranslationDaoMockImpl]
conflicts with existing, non-compatible bean definition of same name and class
[com.tss.in.web.resource.monitor.dao.impl.TaskTypeTranslationDaoImpl]
This occurs with two jar libraries (app1 and app2) in one project: the bean "BeanName" is defined in app1 and is extended in app2, where the bean is redefined with the same name.
In app1:
package com.foo.app1.pkg1;
@Component("BeanName")
public class Class1 { ... }
In app2:
package com.foo.app2.pkg2;
@Component("BeanName")
public class Class2 extends Class1 { ... }
This causes the ConflictingBeanDefinitionException when the applicationContext is loaded, because two classes carry the same component bean name.
To solve this problem, in the Spring
configuration file applicationContext.xml:
<context:component-scan base-package="com.foo.app2.pkg2"/>
<context:component-scan base-package="com.foo.app1.pkg1">
<context:exclude-filter type="assignable" expression="com.foo.app1.pkg1.Class1"/>
</context:component-scan>
This excludes Class1 from component scanning, so it is not automatically assigned to a bean, avoiding the name conflict.
Use of Collator class (java.text)
The Collator class performs locale-sensitive String comparison. You use this class to build searching and sorting routines for natural language text. The following example shows how to compare two strings using the Collator for the default locale:

// Compare two strings in the default locale
Collator myCollator = Collator.getInstance();
if (myCollator.compare("abc", "ABC") < 0)
    System.out.println("abc is less than ABC");
else
    System.out.println("abc is greater than or equal to ABC");
You can set a Collator's strength property to determine the level of difference considered significant in comparisons. Four strengths are provided: PRIMARY, SECONDARY, TERTIARY, and IDENTICAL. The exact assignment of strengths to language features is locale dependent. For example, in Czech, "e" and "f" are considered primary differences, while "e" and "ě" are secondary differences, "e" and "E" are tertiary differences, and "e" and "e" are identical. The following shows how both case and accents can be ignored for US English:
// Get the Collator for US English and set its strength to PRIMARY
Collator usCollator = Collator.getInstance(Locale.US);
usCollator.setStrength(Collator.PRIMARY);
if (usCollator.compare("abc", "ABC") == 0) {
    System.out.println("Strings are equivalent");
}
You use the compare method when performing sort operations. The sample program called CollatorDemo uses the compare method to sort an array of English and French words. This program shows what can happen when you sort the same list of words with two different collators:

Collator fr_FRCollator = Collator.getInstance(new Locale("fr","FR"));
Collator en_USCollator = Collator.getInstance(new Locale("en","US"));
The method for sorting, called sortStrings, can be used with any Collator. Notice that the sortStrings method invokes the compare method:

public static void sortStrings(Collator collator, String[] words) {
    String tmp;
    for (int i = 0; i < words.length; i++) {
        for (int j = i + 1; j < words.length; j++) {
            if (collator.compare(words[i], words[j]) > 0) {
                tmp = words[i];
                words[i] = words[j];
                words[j] = tmp;
            }
        }
    }
}
The English Collator sorts the words as follows:
peach
péché
pêche
sin
According to the collation rules of the French language, the preceding list is in the wrong order. In French, péché should follow pêche in a sorted list. The French Collator sorts the array of words correctly, as follows:
peach
pêche
péché
sin
What will happen if two different objects have the same hashcode?
According to this, if two objects have the same hashcode both will be stored in a LinkedList, but as far as I knew, when hashcodes matched the previous entry would get overridden by the new one.
Can someone please put more light on how HashMap uses an object as a key internally, what will happen if two objects have the same hashcode, and how both objects will be fetched with get()?
Working of the put method:
HashMap works on the principle of hashing; we have the put() and get() methods for storing and retrieving objects from a HashMap. When we pass both a key and a value to the put() method, HashMap uses the key object's hashCode() method to calculate a hashcode, and by applying its hashing function to that hashcode it identifies the bucket location for storing the value object. While retrieving, it uses the key object's equals() method to find the correct key-value pair and returns the value object associated with that key. HashMap uses a linked list in case of collision, and the object is stored in the next node of the linked list. HashMap stores the key+value tuple in every node of the linked list.
Working of the get method:
When we pass a key to the get() method, HashMap calls the hashCode() method on the key object and applies the returned hashcode to its own hashing function to find the bucket location. An important point to mention is that HashMap in Java stores both the key and the value object as a Map.Entry in the bucket. If more than one Entry object is found in the bucket, it calls the equals() method of each node in that bucket.
Since the hashcodes are the same, the bucket location will be the same and a collision occurs in the HashMap. Since HashMap uses a LinkedList to store objects, this entry (an object of Map.Entry comprising key and value) will be stored in the LinkedList.
What happens in a Java HashMap if the size of the HashMap exceeds the threshold defined by the load factor?
Java HashMap re-sizes itself by creating a new bucket array twice the size of the previous one. The default load factor of HashMap is 0.75, so it re-sizes the map once it is 75% full; for example, a default HashMap with capacity 16 resizes when the 13th entry is added (16 × 0.75 = 12).
Have you seen any problem with resizing of HashMap in Java, with multiple threads accessing the HashMap and potentially a race condition on the HashMap?
Yes, a potential race condition exists while resizing a HashMap in Java. If two threads find at the same time that the HashMap needs resizing and both try to resize it, the elements of a bucket that are stored in the linked list get reversed in order during their migration to the new buckets, because Java's HashMap doesn't append new elements at the tail; it appends them at the head to avoid tail traversal. If the race condition happens, you can end up with an infinite loop.
A null key is handled specially in HashMap; there are two separate methods for it, putForNullKey(V value) and getForNullKey(). The equals() and hashCode() methods are not used in the case of null keys in HashMap.
HashMap Changes in JDK 1.7 and JDK 1.8
Some performance improvements were made to HashMap and ArrayList in JDK 1.7 that reduce memory consumption; empty maps are lazily initialized and will cost you less memory.
A poorly distributed hash function can force get() to perform in O(n) instead of O(1) (all keys land in one bucket), and someone can take advantage of this fact. Since JDK 1.8, Java internally replaces the linked list with a balanced binary tree once a certain threshold (eight entries in one bucket) is breached. This ensures performance of order O(log n) even in the worst case, where the hash function is not distributing keys properly.
Adding a new key-value pair:
Calculate the hashcode for the key.
Calculate the bucket number where the element should be placed: hash & (arrayLength - 1) (since the table length is a power of two, this is equivalent to hash % arrayLength).
If you try to add a value with a key which has already been saved in the HashMap, the value gets overwritten.
Otherwise the element is added to the bucket. If the bucket already has at least one element, the new one is added in the first position, with its next field referring to the old element.
Deletion:
Calculate the hashcode for the given key.
Calculate the bucket number: hash & (arrayLength - 1).
Get a reference to the first Entry object in the bucket and, by means of the equals() method, iterate over all entries in the given bucket until the correct Entry is found. If the desired element is not found, return null.
What the put() method actually does:
Before going into put()'s implementation, it is very important to know that instances of the Entry class are stored in an array; the HashMap class defines this variable as transient Entry[] table.
Step 1 - First of all, the key object is checked for null. If the key is null, the value is stored in the table[0] position, because the hash code for null is always 0.
Step 2 - Next, a hash value is calculated from the key's hash code by calling its hashCode() method. This hash value is used to calculate the index in the array for storing the Entry object. The JDK designers were well aware that there might be some poorly written hashCode() implementations that return very high or low hash code values. To solve this issue, they introduced another hash() function and pass the object's hash code through it to bring the hash value into the range of the array's index size.
Step 3 - Now the indexFor(hash, table.length) function is called to calculate the exact index position for storing the Entry object.
Step 4 - Here comes the main part. Since we know that two unequal objects can have the same hash code value, how will two different objects be stored in the same array location (called a bucket)?
The answer is LinkedList. If you remember, the Entry class has an attribute "next". This attribute always points to the next object in the chain; this is exactly the behavior of a LinkedList.
So, in case of collision, Entry objects are stored in LinkedList form. When an Entry object needs to be stored at a particular index, HashMap checks whether there is already an entry. If there is no entry present, the Entry object is stored at this location.
If there is already an object at the calculated index, its next attribute is checked. If it is null, the current Entry object becomes the next node in the LinkedList. If the next variable is not null, the procedure is followed until next is evaluated as null.
What if we add another value object with the same key as entered before? Logically, it should replace the old value. How is this done? After determining the index position of the Entry object, while iterating over the LinkedList at the calculated index, HashMap calls the equals() method on the key object for each Entry object. All these Entry objects in the LinkedList have the same hash code, but equals() tests for true equality. If key.equals(k) is true, both keys are treated as the same key object; this causes the replacement of the value object inside the Entry object only. In this way, HashMap ensures the uniqueness of keys.
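The put() flow above can be condensed into a short sketch. This mirrors the pre-JDK 8 behavior described in the steps; the class and helper names are illustrative, not the real JDK source:

// Illustrative sketch of the put() flow: null check, hash(), indexFor(),
// equals() walk along the chain, head insertion on collision.
class SimpleHashMap<K, V> {

    static class Entry<K, V> {
        final K key;
        V value;
        Entry<K, V> next; // the "next" attribute that forms the LinkedList chain

        Entry(K key, V value, Entry<K, V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    Entry<K, V>[] table = new Entry[16]; // capacity is always a power of two

    public V put(K key, V value) {
        // Step 1: a null key always maps to table[0]
        int index = (key == null) ? 0 : indexFor(hash(key.hashCode()));

        // Steps 2-4: walk the chain; replace the value if an equal key exists
        for (Entry<K, V> e = table[index]; e != null; e = e.next) {
            if (e.key == key || (key != null && key.equals(e.key))) {
                V old = e.value;
                e.value = value; // only the value inside the Entry is replaced
                return old;
            }
        }

        // Collision or empty bucket: the new Entry becomes the head of the chain
        table[index] = new Entry<>(key, value, table[index]);
        return null;
    }

    int hash(int h) { // spreads poorly written hashCode() results
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    int indexFor(int h) { // equivalent to h % table.length when length is a power of two
        return h & (table.length - 1);
    }
}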
How get() works internally
Now we know how key-value pairs are stored in HashMap. The next big question is: what happens when a key is passed to HashMap's get() method? How is the value object determined?
The same logic that establishes key uniqueness in put() is applied in get(): the moment HashMap identifies an exact match for the key object passed as an argument, it returns the value object stored in the current Entry object. If no match is found, get() returns null.
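A matching get() for the SimpleHashMap sketch above, following the same index-then-equals() walk:

// Same index computation as put(), then an equals() walk along the chain.
public V get(K key) {
    int index = (key == null) ? 0 : indexFor(hash(key.hashCode()));
    for (Entry<K, V> e = table[index]; e != null; e = e.next) {
        if (e.key == key || (key != null && key.equals(e.key))) {
            return e.value; // exact match found for the key
        }
    }
    return null; // no match found
}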
Rules of Method Overriding in Java
Following are the rules of method overriding in Java, which must be followed while overriding any method. As stated earlier, private, static and final methods cannot be overridden in Java, but you can overload static, final or private methods.
1. The method signature must be the same, including the return type, number of method parameters, type of parameters, and order of parameters.
2. An overriding method cannot throw a broader checked exception than the original (overridden) method. That means if the original method throws IOException, the overriding method cannot throw a superclass of IOException (e.g. Exception), but it can throw any subclass of IOException, or simply not throw any exception. This rule only applies to checked exceptions in Java; an overriding method is free to throw any unchecked exception.
3. An overriding method cannot reduce the accessibility of the overridden method. That means if the original (overridden) method is public, the overriding method cannot make it protected.
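A short sketch of rules 2 and 3 (the class names are illustrative):

import java.io.FileNotFoundException;
import java.io.IOException;

class Parent {
    protected void read() throws IOException { }
}

class Child extends Parent {
    // Legal: wider access (public) and a narrower checked exception.
    @Override
    public void read() throws FileNotFoundException { }

    // Illegal variants (would not compile):
    // private void read() throws IOException { }  // reduces accessibility (rule 3)
    // public void read() throws Exception { }     // broader checked exception (rule 2)
}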
A covariant return, added in Java 5, applies in the case of method overriding. When a subclass wants to change the method implementation of an inherited method (an override), the subclass must define a method that matches the inherited version exactly. Or, as of Java 5, you're allowed to change the return type in the overriding method as long as the new return type is a subtype of the declared return type of the overridden (superclass) method. Let's look at a covariant return in action:
class Alpha {
    Alpha doStuff(char c) {
        return new Alpha();
    }
}

class Beta extends Alpha {
    Beta doStuff(char c) { // legal override in Java 1.5
        return new Beta();
    }
}
ConcurrentHashMap
and other concurrent collections
HashMap's implementation calls the hashCode() method on the key object and applies the returned hashcode to its own hashing function to find a bucket location for storing the Entry object; an important point to mention is that HashMap in Java stores both the key and the value object as a Map.Entry in the bucket.
For collisions, resolution methods such as linear probing and chaining exist; Java's HashMap uses chaining.
In get(), HashMap uses the key object's hashcode to find the bucket location and retrieve the value object; but then you need to remind the candidate that two value objects are stored in the same bucket. They will mention traversing the LinkedList until the value object is found; then you ask how they identify the correct value object, since they don't have a value object to compare with. Until they know that HashMap stores both the key and the value in the LinkedList node, as a Map.Entry, they won't be able to resolve this and will try and fail.
But those who remember this key information will say that after finding the bucket location, we call key.equals() to identify the correct node in the LinkedList and return the associated value object for that key. Perfect, this is the correct answer.
The queries below trim leading and trailing whitespace:
UPDATE customer_details SET margin=TRIM( margin );
UPDATE customer_details SET sales=TRIM( sales );
UPDATE customer_details SET service=TRIM( service );
Possible alternative
In
PostgreSQL 8.4 and later, it's possible to create a CONCAT function which
behaves the same as MySQL's.
CREATE FUNCTION CONCAT( VARIADIC ANYARRAY )
RETURNS TEXT
LANGUAGE SQL
IMMUTABLE
AS $function$
SELECT array_to_string($1,'');
$function$;
Note
that the above may sometimes become confused about data types:
ERROR: could not determine polymorphic
type because input has type "unknown"
The
alternative is creating a CONCAT function for each reasonable data type (TEXT,
VARCHAR, INTEGER).
JobDetailBean vs MethodInvokingJobDetailFactoryBean

<bean id="sqlUpdateTaskJob"
      class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="SchedulerService" />
    <property name="targetMethod" value="executeSecondTask" />
</bean>

<bean name="sqlUpdateTaskJob"
      class="org.springframework.scheduling.quartz.JobDetailBean">
    <property name="jobClass"
              value="com.tss.in.web.resource.monitor.scheduler.job.RMSchedulerJob" />
    <property name="jobDataAsMap">
        <map>
            <entry key="sqlUpdateListForUpcomingMonth"
                   value-ref="sqlUpdateListForUpcomingMonth" />
        </map>
    </property>
</bean>
The full configuration in the Spring context file then adds a trigger and a scheduler for the sqlUpdateTaskJob bean above:
<bean name="sqlUpdateTaskJob"
class="org.springframework.scheduling.quartz.JobDetailBean">
<property
name="jobClass"
value="com.tss.in.web.resource.monitor.scheduler.job.RMSchedulerJob"
/>
<property name="jobDataAsMap">
<map>
<entry
key="sqlUpdateListForUpcomingMonth"
value-ref="sqlUpdateListForUpcomingMonth" />
</map>
</property>
</bean>
<bean id="SecondSimpleTrigger"
      class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail" ref="sqlUpdateTaskJob" />
    <property name="cronExpression" value="0/12 * * * * ?" />
</bean>

<!-- Scheduler -->
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="jobDetails">
        <list>
            <ref bean="runRMSchedulerJob" />
            <ref bean="sqlUpdateTaskJob" />
        </list>
    </property>
    <property name="triggers">
        <list>
            <ref bean="cronTrigger" />
            <ref bean="SecondSimpleTrigger" />
        </list>
    </property>
</bean>
Properties of the job class are initialized from the jobDataAsMap entries via matching setters (shown here with trackingListTask):
public class RMSchedulerJob extends QuartzJobBean {

    private TrackingListTask trackingListTask;

    public void setTrackingListTask(TrackingListTask trackingListTask) {
        this.trackingListTask = trackingListTask;
    }

    public TrackingListTask getTrackingListTask() {
        return trackingListTask;
    }

    @Override
    protected void executeInternal(JobExecutionContext arg0) throws JobExecutionException {
        this.getTrackingListTask().generateTrackingListReport();
    }
}
What is “Caused by: ClientAbortException: java.net.SocketException: Broken pipe at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:413)”?
Solution:
This happens when the connection to the browser is broken before a page finishes loading. It can happen for a variety of reasons:
The user closed the browser before the page loaded.
Their internet connection failed during loading.
They went to another page before the page loaded.
The browser timed the connection out before the page loaded (this would have to be a large page).
In 99 percent of cases, it can be safely ignored.
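A hedged sketch of tolerating the exception while streaming a response (ClientAbortException is Tomcat-specific; the class and payload here are illustrative):

import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;
import org.apache.catalina.connector.ClientAbortException;

public class ResponseStreamer {
    public void stream(HttpServletResponse response, byte[] payload) throws IOException {
        try {
            ServletOutputStream out = response.getOutputStream();
            out.write(payload);
            out.flush();
        } catch (ClientAbortException e) {
            // The client disconnected before the page finished loading;
            // per the note above, this can usually be ignored (or logged at debug level).
        }
    }
}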
As I try to compile classes in my workspace, it shows the error message "illegal character: \65279 when using file encoding UTF8".
\65279 is the decimal value of U+FEFF, the Unicode byte-order mark (BOM). Open the file in Notepad++ and convert its encoding to UTF-8 without BOM, then recompile.
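If you would rather fix it programmatically, a minimal sketch that strips a leading BOM from a source file (the path is taken from the command line):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BomStripper {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get(args[0]); // the offending source file
        String content = new String(Files.readAllBytes(path), StandardCharsets.UTF_8);
        if (!content.isEmpty() && content.charAt(0) == '\uFEFF') { // '\uFEFF' == 65279
            Files.write(path, content.substring(1).getBytes(StandardCharsets.UTF_8));
        }
    }
}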
PropertyUtils.getProperty(Object, String): you can pass the property name as a string parameter and you will get an Object as the return type; after that you can check with the instanceof operator which kind of property it is.
Example:
public static Object getProperties(Object object, String fieldName) {
    try {
        return PropertyUtils.getProperty(object, fieldName);
    } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
        throw new RMControllerException(RMConstants.EXCEPTION_MSG_INVALID_METHOD_INVOCATION, e);
    }
}
Uses:
Object propValue = RMUtil.getProperties(quarterlyTaskTrackTO, field);
if (propValue != null) {
    if (propValue instanceof String) {
        cell.setCellValue((String) propValue);
    } else if (propValue instanceof Integer) {
        cell.setCellType(XSSFCell.CELL_TYPE_NUMERIC);
        cell.setCellValue((Integer) propValue);
    } else if (propValue instanceof BigDecimal) {
        BigDecimal decimal = (BigDecimal) propValue;
        cell.setCellType(XSSFCell.CELL_TYPE_NUMERIC);
        cell.setCellValue(decimal.doubleValue());
    }
}
If you're using a recent enough copy of Apache POI (e.g. 3.8), then encrypted .xls files (HSSF) and .xlsx files (XSSF) can be decrypted (provided you have the password!). At the moment you can't write out encrypted Excel files though, only unencrypted ones.
Notes for Apache POI in terms of improvement:
If you are using Apache POI to generate a large Excel file, please take note of the sheet.autoSizeColumn((short) p); line, because it will impact performance.
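One mitigation, sketched under the assumption that column widths only need to be final when the file is written: auto-size each column once after all rows are written, rather than inside the row-writing loop, since every call scans the entire column (recent POI versions take an int column index):

import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class AutoSizeDemo {
    public static void main(String[] args) throws IOException {
        XSSFWorkbook workbook = new XSSFWorkbook();
        XSSFSheet sheet = workbook.createSheet("report");
        int columnCount = 5; // illustrative
        // ... write all the rows first ...
        // Then auto-size each column exactly once:
        for (int p = 0; p < columnCount; p++) {
            sheet.autoSizeColumn(p);
        }
        try (FileOutputStream out = new FileOutputStream("report.xlsx")) {
            workbook.write(out);
        }
    }
}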
HashMap and ConcurrentHashMap in Java
1. ConcurrentHashMap does not allow null keys or null values. HashMap, by contrast, allows one null key (and any number of null values); putting the null key again overrides the previous mapping. (A contrasting sketch follows the synchronizedMap example below.)
2. ConcurrentHashMap is thread-safe while HashMap is not. (In a single-threaded environment HashMap is usually faster than ConcurrentHashMap, because ConcurrentHashMap pays for its locking: only a single thread can access a given portion of the map at a time, which reduces performance.)
3. A HashMap can be synchronized using the Collections.synchronizedMap(hashMap) method. This gives us a Map object equivalent to a Hashtable, so every modification performed on the map locks the whole map object.
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class TestHashMap {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<String, String>();
        hashMap.put(null, null);
        hashMap.put("X1", null);
        hashMap.put("C1", "Acathan");
        hashMap.put("S1", null);

        Map<String, String> synchronizedHashMaps = Collections.synchronizedMap(hashMap);
        Set<String> keySet = synchronizedHashMaps.keySet();

        // Iteration must synchronize on the map, not on the key set
        synchronized (synchronizedHashMaps) {
            Iterator<String> itr = keySet.iterator();
            while (itr.hasNext()) {
                System.out.println(itr.next());
            }
        }
    }
}
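As referenced in point 1 above, a contrasting sketch: the same null inserts that HashMap accepts would fail on ConcurrentHashMap:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TestConcurrentHashMap {
    public static void main(String[] args) {
        Map<String, String> map = new ConcurrentHashMap<String, String>();
        map.put("C1", "Acathan");
        // map.put(null, null);  // would throw NullPointerException
        // map.put("X1", null);  // null values are rejected as well
        System.out.println(map.get("C1")); // Acathan
    }
}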
ConcurrentHashMap synchronizes, or locks, only a portion of the map. To optimize performance, the map is divided into different partitions (segments) depending upon the concurrency level, so the whole map object does not need to be synchronized. This bucket-level locking is the reason ConcurrentHashMap provides better performance than a synchronized HashMap or a Hashtable in a multi-threaded environment.
With a synchronized HashMap, every read/write operation needs to acquire the lock, whereas in ConcurrentHashMap there is no locking at the time of reading.
Why is ConcurrentHashMap better than a synchronized HashMap in performance?
ConcurrentHashMap locks only a certain part of the map, hence other threads can access different parts of the map and make use of the data. But in a synchronized HashMap the lock is taken on the whole map, because of which other threads cannot access it until the lock is released; this slows down data access, and is why ConcurrentHashMap was introduced.