Hibernate 3.6.0.Final + PostgreSQL + CLOBs

I recently upgraded a project I’m working on from Hibernate 3.5.6 to 3.6.0.Final and realized that one of my entities with a CLOB (character large object) field was failing to load.  I was getting an exception stack trace similar to:

Caused by: org.postgresql.util.PSQLException: Bad value for type long : <table border="0" cellspacing="0" cellpadding="0" id="productDetailLineItems"><thead><tr><td rowspan="2"><input type="hidden" name="productGroupId" id="productGroupId" value="101111"/>Item Number</td><td rowspan="2">Motor HP</td><td rowspan="2">Price</td></tr></thead><tbody><tr><form method="post" id="4581000" name="4581000" action=""><td>4581000</td><td><span style="fraction"><sup>1</sup>/<sub>2</sub></span></td><td><input type="button" onclick="javascript:addToCart('4581000');" value="$prc4581000" /></td></form></tr></tbody></table>
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.toLong(AbstractJdbc2ResultSet.java:2690) [:]
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getLong(AbstractJdbc2ResultSet.java:1995) [:]
at org.postgresql.jdbc3.Jdbc3ResultSet.getClob(Jdbc3ResultSet.java:44) [:]
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getClob(AbstractJdbc2ResultSet.java:373) [:]
at org.jboss.resource.adapter.jdbc.WrappedResultSet.getClob(WrappedResultSet.java:516) [:6.0.0.Final]
at org.hibernate.type.descriptor.sql.ClobTypeDescriptor$2.doExtract(ClobTypeDescriptor.java:70) [:3.6.0.Final]
at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:64) [:3.6.0.Final]
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:253) [:3.6.0.Final]
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:249) [:3.6.0.Final]
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:229) [:3.6.0.Final]
at org.hibernate.type.AbstractStandardBasicType.hydrate(AbstractStandardBasicType.java:330) [:3.6.0.Final]
at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:2265) [:3.6.0.Final]
at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1527) [:3.6.0.Final]
at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1455) [:3.6.0.Final]
at org.hibernate.loader.Loader.getRow(Loader.java:1355) [:3.6.0.Final]
at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:611) [:3.6.0.Final]
at org.hibernate.loader.Loader.doQuery(Loader.java:829) [:3.6.0.Final]
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274) [:3.6.0.Final]
at org.hibernate.loader.Loader.loadEntity(Loader.java:2037) [:3.6.0.Final]
... 167 more

I do local development on my MacBook Pro with PostgreSQL 9.0.2 and point my machine at the client’s DB2 database once in a while for pushes to their development system.  When I point the system at their DB2 development database, everything works fine.  Pointing back at my PostgreSQL instance, the entity craps out.  I tried going back to the old 8.0 JDBC driver like some people online suggested, with no luck.  I hadn’t tried using this entity since I upgraded to Hibernate 3.6, and since I’ve also been moving from OpenEJB to JBoss 6, it could have been that as well.

Ultimately I found a simple solution.  As the stack trace shows, Hibernate 3.6 now reads the field via ResultSet.getClob(), and the PostgreSQL driver expects a CLOB column to contain a large-object OID (a long) rather than the text itself, hence the “Bad value for type long” error.  The fix is a specific Hibernate annotation on your entity that tells Hibernate to read the column as a string (I’m using JPA entities, by the way).  On your CLOB field, in addition to the @Lob, add:

@Type(type="org.hibernate.type.StringClobType")

Redeploy your application and Hibernate should behave properly.
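In context, the mapping looks like this (a minimal sketch; the entity and field names here are hypothetical, only the annotation pair matters):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import org.hibernate.annotations.Type;

// Hypothetical entity for illustration; only the CLOB mapping matters.
@Entity
public class Page {

    @Id
    private Long id;

    // @Lob alone makes Hibernate 3.6 call ResultSet.getClob(), which the
    // PostgreSQL driver treats as a large-object OID lookup.  StringClobType
    // reads the column as a plain string instead, which works against both
    // a PostgreSQL text column and a DB2 CLOB.
    @Lob
    @Type(type = "org.hibernate.type.StringClobType")
    private String content;

    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
}
```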

Spring 3.0 + Hibernate 3.3.2 + JBoss Cache 2 + JTA = Fail

I’ve spent the past two days trying to get a distributed Hibernate second-level cache working with a Spring 3 application.  The application is web-based, running on JBoss 5.1, so I figured the best approach would be to use JBoss Cache, since it’s automatically configured and available in JNDI when you use the “all” configuration.

Hibernate 3.3.2 is configured inside of Spring using the annotation-based session factory bean.  Because I’m using JTA to manage transactions and Hibernate’s current session, I need to make sure that whatever second-level cache I choose is aware of the transaction manager.  I originally had EHCache 2.0.1 hooked into Hibernate via Hibernate configuration parameters passed into Spring’s bean; I was not setting the cache factory parameter on that bean.  Everything works fine in this configuration, and it recognizes the JTA transactions.
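For reference, the working setup looked roughly like this (a sketch, not my exact bean definition; bean names are illustrative, the property keys are the standard Hibernate 3.3 ones):

```xml
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="hibernateProperties">
        <props>
            <!-- Bind Hibernate's current session to the container's JTA transaction -->
            <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop>
            <prop key="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</prop>
            <prop key="hibernate.current_session_context_class">jta</prop>
            <!-- Second-level cache via EHCache -->
            <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory</prop>
            <prop key="hibernate.cache.use_second_level_cache">true</prop>
        </props>
    </property>
</bean>
```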

This application needs to be clustered horizontally – ensuring each component of the solution is failover-ready.  JBoss Cache 2 is baked into JBoss AS 5.1 and a logical choice to pick.  Hibernate has an extension JAR and it’s a simple interface, especially when you’re pulling out the cache from JNDI.

The problem?  The JTA transaction manager that Spring proxies isn’t visible to JBoss Cache.  JBoss Cache (on JBoss AS) runs at the container level inside a different classloader, and Spring holds its reference to the transaction manager proxy in a ThreadLocal variable that is NOT accessible to the container.  To get around this, I tried using Hibernate’s JBoss transaction manager lookup class, thinking that since it’s JTA, it would work.
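For the record, the attempt amounted to swapping the region factory and lookup class in the Hibernate properties (a sketch; the class names are from the hibernate-jbosscache2 module, and I’m omitting the cache’s JNDI binding configuration, which this factory also needs):

```properties
# Pull the JBoss Cache instance out of JNDI and use it as the second-level cache
hibernate.cache.region.factory_class=org.hibernate.cache.jbc2.JndiSharedJBossCacheRegionFactory
# Resolve the container's JTA TransactionManager instead of Spring's proxy
hibernate.transaction.manager_lookup_class=org.hibernate.transaction.JBossTransactionManagerLookup
```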

It didn’t.

When I tried bringing up the application, I ran into errors surrounding the current transaction.  At one point, the transaction was timing out during a cache pre-load process the application runs.  In another configuration, the application loaded, but after a few minutes it threw an exception about a stale JDBC connection to the database.  It was obvious to me that the JTA transaction surrounding the cache wasn’t synchronized with the Spring-managed JTA transaction.

I eventually gave up on JBoss Cache and am going with EHCache 2.0.1, using JGroups to synchronize each node.  The “all” configuration of JBoss AS 5.1 has both JGroups and JBoss Cache preconfigured.  I hooked into the synchronous JGroups UDP configuration, using the same JVM parameters the JBoss XML files use, so my app doesn’t need a special deployment for each server.  Theoretically, the configuration will work in a cluster without having to change anything.
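The EHCache side of that is configured in ehcache.xml via the JGroups replication module (a sketch under my setup, not a drop-in config: the protocol stack string is abbreviated and should mirror the UDP stack in JBoss’s own JGroups XML, and jboss.partition.udpGroup is the system property the “all” configuration already sets):

```xml
<ehcache>
    <!-- Replicate across nodes over the same multicast group JBoss AS uses.
         The connect string below is truncated for illustration. -->
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
        properties="connect=UDP(mcast_addr=${jboss.partition.udpGroup};mcast_port=45566):PING:MERGE2:FD_SOCK:pbcast.NAKACK:pbcast.STABLE:pbcast.GMS"/>

    <defaultCache maxElementsInMemory="10000" eternal="false"
                  timeToIdleSeconds="300" timeToLiveSeconds="600">
        <!-- Synchronous replication, matching the synchronous JGroups stack -->
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
            properties="replicateAsynchronously=false"/>
    </defaultCache>
</ehcache>
```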

We’ll see how the clustering works later on but for now, single node in cluster mode is working just fine.