JDBC supports connection pooling, which essentially involves keeping open a cache of database connection objects and making them available for immediate use for any application that requests a connection. Instead of performing expensive network roundtrips to the database server, a connection attempt results in the re-assignment of a connection from the local cache to the application. When the application disconnects, the physical tie to the database server is not severed, but instead, the connection is placed back into the cache for immediate re-use, substantially improving data access performance.
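The idea can be sketched with a toy pool (written in Python here purely for illustration; real JDBC pools such as Tomcat's do much more, e.g. connection validation and timeouts). The `factory` argument is a stand-in for the expensive connection setup:

```python
import queue

class SimplePool:
    """Toy connection pool: pre-opens N 'connections' and recycles them."""
    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())  # pay the expensive setup cost once

    def borrow(self):
        # Hand out an already-open connection instead of creating a new one.
        return self._idle.get()

    def release(self, conn):
        # "Disconnecting" just returns the object to the cache; nothing is closed.
        self._idle.put(conn)

# Hypothetical cheap factory standing in for a real database connection.
pool = SimplePool(factory=lambda: object(), size=1)
c1 = pool.borrow()
pool.release(c1)
c2 = pool.borrow()
print(c1 is c2)  # True: the same physical connection is reused
```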
To learn more, check out these links.
From my limited research, I understand that Tomcat implements connection pooling by default.
Here is a link that talks about it at length: http://www.javapractices.com/Topic75.cjp
Also, during my research I came across a nice article by Martin Fowler discussing the design decision of allowing certain business logic in the database rather than handling it exclusively in the application software (especially things like ORDER BY and filtering tools such as WHERE and LIKE).
Here's the link
This is essentially the point made by the Oracle database legend Tom Kyte in the article JDBC : SQL vs PL/SQL, which performs better.
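The trade-off can be demonstrated with any database; here is a small sketch using Python's built-in sqlite3 (the table and data are made up for illustration). Filtering with WHERE ships only the matching rows to the application, instead of pulling the whole table across and filtering it in code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "open" if i % 2 else "closed") for i in range(10)])

# Filtering in the application: every row crosses the driver boundary.
all_rows = conn.execute("SELECT id, status FROM orders").fetchall()
open_in_app = [r for r in all_rows if r[1] == "open"]

# Filtering in the database: only matching rows are ever transferred.
open_in_db = conn.execute(
    "SELECT id, status FROM orders WHERE status = ?", ("open",)).fetchall()

print(len(all_rows), len(open_in_db))  # 10 5
```

Same result either way, but the second query asks the database to do the pruning at the source, which is exactly the point Kyte and Fowler make.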
A simple analogy (not an entirely perfect one):
when we need to grep for, say, the automountd process just to find its PID,
instead of ps -aef | grep auto
a simple ps -a -o comm,pid | grep auto
will be more effective, because ps is told to emit only the columns we need.
This design problem is tackled across various layers. A typical case is the OS, where we often end up generating huge amounts of data (say in truss or ps output) and then pruning that already-computed data with utilities like grep and awk. A tool that stops the unwanted data from being generated in the first place always scores over the basic tools we chain together.