Tuesday, March 24, 2009

SCOP


Nearly all proteins have structural similarities
with other proteins and, in many cases, share a
common evolutionary origin. The knowledge of
these relationships makes important contributions to
molecular biology and to other related areas of
science. It is central to our understanding of the
structure and evolution of proteins. It will play an
important role in the interpretation of the sequences
produced by the genome projects and, therefore, in
understanding the evolution of development.
The recent exponential growth in the number of
proteins whose structures have been determined by
X-ray crystallography and NMR spectroscopy
means that there is now a large and rapidly growing
corpus of information available. At present (January,
1995) the Brookhaven Protein Databank (PDB;
Abola et al., 1987) contains 3091 entries and the
number is increasing by about 100 a month. To
facilitate the understanding of, and access to, this
information, we have constructed the Structural
Classification of Proteins (scop) database. This
database provides a detailed and comprehensive
description of the structural and evolutionary
relationships of proteins whose three-dimensional
structures have been determined. It includes all proteins in the current version of the PDB and
almost all proteins for which structures have been
published but whose co-ordinates are not available
from the PDB.
The classification of protein structures in the
database is based on evolutionary relationships and
on the principles that govern their three-dimensional
structure. Early work on protein structures showed
that there are striking regularities in the ways in
which secondary structures are assembled (Levitt
& Chothia, 1976; Chothia et al., 1977) and in the
topologies of the polypeptide chains (Richardson,
1976, 1977; Sternberg & Thornton, 1976). These
regularities arise from the intrinsic physical and
chemical properties of proteins (Chothia, 1984;
Finkelstein & Ptitsyn, 1987) and provide the basis for
the classification of protein folds (Levitt & Chothia,
1976; Richardson, 1981). This early work has been
taken further in more recent papers; see, for example,
Holm & Sander (1993), Orengo et al. (1993),
Overington et al. (1993) and Yee & Dill (1993). An
extensive bibliography of papers on the classification
and the determinants of protein folds is given in scop.
The method used to construct the protein
classification in scop is essentially the visual
inspection and comparison of structures though
various automatic tools are used to make the task
manageable and help provide generality. Given the current limitations of purely automatic procedures,
we believe this approach produces the most
accurate and useful results. The unit of classification
is usually the protein domain. Small
proteins, and most of those of medium size, have
a single domain and are, therefore, treated as a
whole. The domains in large proteins are usually
classified individually.
The classification is on hierarchical levels that
embody the evolutionary and structural relationships.
FAMILY. Proteins are clustered together into
families on the basis of one of two criteria that imply
their having a common evolutionary origin: first, all
proteins that have residue identities of 30% and
greater; second, proteins with lower sequence identities but whose functions and structures are
very similar; for example, globins with sequence
identities of 15%.
SUPERFAMILY. Families, whose proteins have
low sequence identities but whose structures and, in
many cases, functional features suggest that a
common evolutionary origin is probable, are placed
together in superfamilies; for example, actin, the
ATPase domain of the heat-shock protein and
hexokinase (Flaherty et al., 1991).
COMMON FOLD. Superfamilies and families are
defined as having a common fold if their proteins
have the same major secondary structures in the same
arrangement and with the same topological connections.
In scop we give for each fold short descriptions of its
main structural features. Different proteins with the
same fold usually have peripheral elements of
secondary structure and turn regions that differ in
size and conformation and, in the more divergent
cases, these differing regions may form half or more
of each structure. For proteins placed together in the
same fold category, the structural similarities
probably arise from the physics and chemistry of
proteins favouring certain packing arrangements and
chain topologies (see above). There may, however,
be cases where a common evolutionary origin is
obscured by the extent of the divergence in sequence,
structure and function. In these cases, it is possible
that the discovery of new structures, with folds
between those of the previously known structures,
will make clear their common evolutionary relationship.
CLASS. For convenience of users, the different
folds have been grouped into classes. Most of the
folds are assigned to one of the five structural classes
on the basis of the secondary structures of which
they are composed: (1) all alpha (for proteins whose
structure is essentially formed by α-helices), (2) all
beta (for those whose structure is essentially formed
by β-sheets), (3) alpha and beta (for proteins with
α-helices and β-strands that are largely interspersed),
(4) alpha plus beta (for those in which
α-helices and β-strands are largely segregated) and
(5) multi-domain (for those with domains of different
fold and for which no homologues are known at
present). Note that we do not use Greek characters
in scop because they are not accessible to all world
wide web viewers. More unusual proteins, peptides
and the PDB entries for designed proteins, theoretical models, nucleic acids and carbohydrates,
have been assigned to other classes.
The numbers of entries, families, superfamilies and
common folds in the current version of scop are
shown in Figure 1. The exact positions of the boundaries
between family, superfamily and fold are, to some
degree, subjective. However, because all proteins
that could conceivably belong to a family or
superfamily are clustered together in the encompassing
fold category, some users may wish to
concentrate on this part of the database.
In addition to the information on structural and
evolutionary relationships, each entry (for which
co-ordinates are available) has links to images of the
structure, interactive molecular viewers, the atomic
co-ordinates, sequence data and homologues and
MEDLINE abstracts (see Table 1).
Two search facilities are available in scop. The
homology search permits users to enter a sequence
and obtain a list of any structures to which it has
significant levels of sequence similarity. The key
word search finds, for a word entered by the user,
matches from both the text of the scop database and
the headers of Brookhaven Protein Databank
structure files.
To provide easy and broad access, we have made
the scop database available as a set of tightly coupled
hypertext pages on the world wide web (WWW).
This allows it to be accessed by any machine on the
internet (including Macintoshes, PCs and workstations)
using free WWW reader programs, such as
Mosaic (Schatz & Hardin, 1994). Once such a
program has been started, it is necessary only to
‘‘open’’ URL:
http://scop.mrc-lmb.cam.ac.uk/scop/
to obtain the ‘‘home’’ page level of the database.
In Figure 2 we show a typical page from the
database. Each page has buttons to go back to the
top-level home page, to send electronic mail to the
authors, and to retrieve a detailed help page.
Navigating through the tree structure is simple;
selecting any entry retrieves the appropriate page. In
addition, buttons make it possible to move within the
hierarchy in other manners, such as ‘‘upwards’’ to
obtain broader levels of classification.
The scop database was originally created as a
tool for understanding protein evolution through
sequence-structure relationships and determining if
new sequences and new structures are related to
previously known protein structures. On a more
general level, the highest levels of classification
provide an overview of the diversity of protein
structures now known and would be appropriate
both for researchers and students. The specific lower
levels should be helpful for comparing individual
structures with their evolutionarily and structurally
related counterparts. In addition, we have also found
that the search capabilities with easy access to data
and images make scop a powerful general-purpose
interface to the PDB.
As new structures are released by PDB and
published, they will be entered in scop and revised versions of the database will be made available on
WWW. Moreover, as our formal understanding of
relationships between structure, sequence, function
and evolution grows, it will be embodied in
additional facilities in the database.

Tuesday, March 17, 2009

XML (Extensible Markup Language)

XML (Extensible Markup Language) is a general-purpose specification for creating custom markup languages. It is classified as an extensible language because it allows the user to define the markup elements. XML's purpose is to aid information systems in sharing structured data, especially via the Internet, to encode documents, and to serialize data; in the last context, it compares with text-based serialization languages such as JSON, YAML and S-expressions.
XML's set of tools helps developers in creating web pages but its usefulness goes well beyond that. XML, in combination with other standards, makes it possible to define the content of a document separately from its formatting, making it easy to reuse that content in other applications or for other presentation environments. Most importantly, XML provides a basic syntax that can be used to share information between different kinds of computers, different applications, and different organizations without needing to pass through many layers of conversion.
XML began as a simplified subset of the Standard Generalized Markup Language (SGML), meant to be readable by people; by adding semantic constraints, application languages can be implemented in XML. These include XHTML, RSS, MathML, GraphML, Scalable Vector Graphics, MusicXML, and others. Moreover, XML is sometimes used as the specification language for such application languages.
XML is recommended by the World Wide Web Consortium (W3C). It is a fee-free open standard. The recommendation specifies lexical grammar and parsing requirements.
Correctness
An XML document has two correctness levels:
· Well-formed. A well-formed document conforms to the XML syntax rules; e.g. if a start-tag (<tag>) appears without a corresponding end-tag (</tag>), it is not well-formed. A document that is not well-formed is not XML; a conforming parser is disallowed from processing it.
· Valid. A valid document additionally conforms to semantic rules, which are either user-defined or defined in an XML schema (especially a DTD); e.g. if a document contains an undefined element, then it is not valid; a validating parser is disallowed from processing it.
Well-formedness
If only well-formedness is required, XML is a generic framework for storing any amount of text or any data whose structure can be represented as a tree. The only indispensable syntactical requirement is that the document has exactly one root element (also known as the document element), i.e. the text must be enclosed between a root start-tag and a corresponding end-tag. The following is a "well-formed" XML document:
<book>This is a book....</book>
The root element can be preceded by an optional XML declaration stating what XML version is in use (normally 1.0); it may also contain character encoding and external dependencies information, for example: <?xml version="1.0" encoding="UTF-8"?>

The specification requires that processors of XML support the pan-Unicode character encodings UTF-8 and UTF-16 (UTF-32 is not mandatory). The use of more limited encodings, e.g. those based on ISO/IEC 8859, is acknowledged, widely used, and supported.
Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA.
XML comments start with <!-- and end with -->, for example: <!-- this is a comment -->. Two consecutive dashes (--) may not appear anywhere in the text of the comment.

In any meaningful application, additional markup is used to structure the contents of the XML document. The text enclosed by the root tags may contain an arbitrary number of XML elements. The basic syntax for one element is:
<element_name>Element Content</element_name>
The two instances of »element_name« are referred to as the start-tag and end-tag, respectively. Here, »Element Content« is some text which may again contain XML elements. So, a generic XML document contains a tree-based data structure. Here is an example of a structured XML document:

<recipe name="bread" prep_time="5 mins" cook_time="3 hours">
  <title>Basic bread</title>
  <ingredient amount="8" unit="dL">Flour</ingredient>
  <ingredient amount="10" unit="grams">Yeast</ingredient>
  <ingredient amount="4" unit="dL" state="warm">Water</ingredient>
  <ingredient amount="1" unit="teaspoon">Salt</ingredient>
  <instructions>
    <step>Mix all ingredients together.</step>
    <step>Knead thoroughly.</step>
    <step>Cover with a cloth, and leave for one hour in warm room.</step>
    <step>Knead again.</step>
    <step>Place in a bread baking tin.</step>
    <step>Cover with a cloth, and leave for one hour in warm room.</step>
    <step>Bake in the oven at 180°C for 30 minutes.</step>
  </instructions>
</recipe>
Attribute values must always be quoted, using single or double quotes, and each attribute name may appear only once in any single element.
XML requires that elements be properly nested—elements may never overlap, and so must be closed in the order opposite to which they are opened. For example, this fragment of code below cannot be part of a well-formed XML document because the title and author elements are closed in the wrong order:

<title>Book on Logic<author>Aristotle</title></author>
One way of writing the same information in a way which could be incorporated into a well-formed XML document is as follows:

<title>Book on Logic</title> <author>Aristotle</author>
XML provides special syntax for representing an element with empty content. Instead of writing a start-tag followed immediately by an end-tag, a document may contain an empty-element tag. An empty-element tag resembles a start-tag but contains a slash just before the closing angle bracket. The following three examples are equivalent in XML:
<foo></foo>
<foo />
<foo/>
An empty-element tag may contain attributes:
<foo attribute="value" />
Entity references
An entity in XML is a named body of data, usually text. Entities are often used to represent single characters that cannot easily be entered on the keyboard; they are also used to represent pieces of standard ("boilerplate") text that occur in many documents, especially if there is a need to allow such text to be changed in one place only.
Special characters can be represented either using entity references, or by means of numeric character references. An example of a numeric character reference is "&#x20AC;", which refers to the euro symbol ("€") by means of its Unicode code point in hexadecimal.
An entity reference is a placeholder that represents that entity. It consists of the entity's name preceded by an ampersand ("&") and followed by a semicolon (";"). XML has five predeclared entities:
· &amp; (& or "ampersand")
· &lt; (< or "less than")
· &gt; (> or "greater than")
· &apos; (' or "apostrophe")
· &quot; (" or "quotation mark")
Here is an example using a predeclared XML entity to represent the ampersand in the name "AT&T":
AT&amp;T
Additional entities (beyond the predefined ones) can be declared in the document's Document Type Definition (DTD). A basic example of doing so in a minimal internal DTD follows. Declared entities can describe single characters or pieces of text, and can reference each other.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE example [
  <!ENTITY copy "&#xA9;">
  <!ENTITY copyright-notice "Copyright &copy; 2006, XYZ Enterprises">
]>
<example>
  &copyright-notice;
</example>

Numeric character references
Numeric character references look like entity references, but instead of a name, they contain the "#" character followed by a number. The number (in decimal or "x"-prefixed hexadecimal) represents a Unicode code point. Unlike entity references, they are neither predeclared nor do they need to be declared in the document's DTD. They have typically been used to represent characters that are not easily encodable, such as an Arabic character in a document produced on a European computer. The ampersand in the "AT&T" example could also be escaped like this (decimal 38 and hexadecimal 26 both represent the Unicode code point for the "&" character):
AT&#38;T
AT&#x26;T
Similarly, in the previous example, notice that "&#xA9;" is used to generate the "©" symbol.
Well-formed documents
In XML, a well-formed document must conform to the following rules, among others:
· Non-empty elements are delimited by both a start-tag and an end-tag.
· Empty elements may be marked with an empty-element (self-closing) tag, such as <foo />. This is equal to <foo></foo>.
· All attribute values are quoted with either single (') or double (") quotes. A value opened with a single quote must be closed with a single quote, and likewise for double quotes.
· To include a double quote inside an attribute value that is double quoted, or a single quote inside an attribute value that is single quoted, escape the inner quote mark using a character entity reference. This is necessary when an attribute value must contain both types of quote (single and double) or when you do not have control over the type of quotation mark a particular XML editor uses for wrapping attribute values. These character entity references are predefined in XML and do not need to be declared even when using a DTD or Schema: &quot; and &apos;. You may also use the numeric character references (hex) &#x22; and &#x27; or their equivalent decimal notations &#34; and &#39;.
· Tags may be nested but must not overlap. Each non-root element must be completely contained in another element.
· The document complies with its declared character encoding. The encoding may be declared or implied externally, such as in "Content-Type" headers when a document is transported via HTTP, or internally, using explicit markup at the very beginning of the document. When no such declaration exists, a Unicode encoding is assumed, as defined by a Unicode Byte Order Mark before the document's first character. If the mark does not exist, UTF-8 encoding is assumed.
Element names are case-sensitive. For example, the following is a well-formed matching pair:
<Step> ... </Step>
whereas these are not:
<Step> ... </step>
<STEP> ... </step>
By carefully choosing the names of the XML elements one may convey the meaning of the data in the markup. This increases human readability while retaining the rigor needed for software parsing.
Choosing meaningful names implies the semantics of elements and attributes to a human reader without reference to external documentation. However, this can lead to verbosity, which complicates authoring and increases file size.
Automatic verification
It is relatively simple to verify that a document is well-formed or validated XML, because the rules of well-formedness and validation of XML are designed for portability of tools. The idea is that any tool designed to work with XML files will be able to work with XML files written in any XML language (or XML application). Here are some examples of ways to verify XML documents:
· load it into an XML-capable browser, such as Firefox or Internet Explorer
· use a tool like xmlwf (usually bundled with expat)
· parse the document, for instance in Ruby:
irb> require "rexml/document"
irb> include REXML
irb> doc = Document.new(File.new("test.xml")).root
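For readers working in Java, a roughly equivalent well-formedness check can be sketched with the standard javax.xml.parsers API bundled with the JDK; the file name test.xml and the class name are only illustrative:
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class WellFormedCheck {
    public static void main( String[] args ) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        // parse() throws an exception if test.xml is not well-formed XML
        Document doc = builder.parse( new File( "test.xml" ) );
        System.out.println( "Root element: " + doc.getDocumentElement().getTagName() );
    }
}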
Validity
By leaving the names, allowable hierarchy, and meanings of the elements and attributes open and definable by a customizable schema or DTD, XML provides a syntactic foundation for the creation of purpose-specific, XML-based markup languages. The general syntax of such languages is rigid — documents must adhere to the general rules of XML, ensuring that all XML-aware software can at least read and understand the relative arrangement of information within them. The schema merely supplements the syntax rules with a set of constraints. Schemas typically restrict element and attribute names and their allowable containment hierarchies, such as only allowing an element named 'birthday' to contain one element named 'month' and one element named 'day', each of which has to contain only character data. The constraints in a schema may also include data type assignments that affect how information is processed; for example, the 'month' element's character data may be defined as being a month according to a particular schema language's conventions, perhaps meaning that it must not only be formatted a certain way, but also must not be processed as if it were some other type of data.
An XML document that complies with a particular schema/DTD, in addition to being well-formed, is said to be valid.
An XML schema is a description of a type of XML document, typically expressed in terms of constraints on the structure and content of documents of that type, above and beyond the basic constraints imposed by XML itself. A number of standard and proprietary XML schema languages have emerged for the purpose of formally expressing such schemas, and some of these languages are XML-based, themselves.
Before the advent of generalised data description languages such as SGML and XML, software designers had to define special file formats or small languages to share data between programs. This required writing detailed specifications and special-purpose parsers and writers.
XML's regular structure and strict parsing rules allow software designers to leave parsing to standard tools, and since XML provides a general, data model-oriented framework for the development of application-specific languages, software designers need only concentrate on the development of rules for their data, at relatively high levels of abstraction.
Well-tested tools exist to validate an XML document "against" a schema: the tool automatically verifies whether the document conforms to constraints expressed in the schema. Some of these validation tools are included in XML parsers, and some are packaged separately.
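As a sketch of this kind of tool-driven validation in Java, the javax.xml.validation API (part of Java SE 5 and later) can check a document against a W3C XML Schema; the file names schema.xsd and document.xml are placeholders:
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class ValidateAgainstXsd {
    public static void main( String[] args ) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance( XMLConstants.W3C_XML_SCHEMA_NS_URI );
        Schema schema = factory.newSchema( new File( "schema.xsd" ) );
        Validator validator = schema.newValidator();
        // validate() throws an exception if document.xml does not conform to the schema
        validator.validate( new StreamSource( new File( "document.xml" ) ) );
        System.out.println( "document.xml is valid" );
    }
}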
Other usages of schemas exist: XML editors, for instance, can use schemas to support the editing process (by suggesting valid element and attribute names, etc.).
DTD
The oldest schema format for XML is the Document Type Definition (DTD), inherited from SGML. While DTD support is ubiquitous due to its inclusion in the XML 1.0 standard, it is seen as limited for the following reasons:
· It has no support for newer features of XML, most importantly namespaces.
· It lacks expressiveness. Certain formal aspects of an XML document cannot be captured in a DTD.
· It uses a custom non-XML syntax, inherited from SGML, to describe the schema.
DTD is still used in many applications because it is considered the easiest to read and write.
XML Schema
A newer XML schema language, described by the W3C as the successor of DTDs, is XML Schema, or more informally referred to by the initialism for XML Schema instances, XSD (XML Schema Definition). XSDs are far more powerful than DTDs in describing XML languages. They use a rich datatyping system, allow for more detailed constraints on an XML document's logical structure, and must be processed in a more robust validation framework. XSDs also use an XML-based format, which makes it possible to use ordinary XML tools to help process them, although XSD implementations require much more than just the ability to read XML.
RELAX NG
Another popular schema language for XML is RELAX NG. Initially specified by OASIS, RELAX NG is now also an ISO international standard (as part of DSDL). It has two formats: an XML based syntax and a non-XML compact syntax. The compact syntax aims to increase readability and writability but, since there is a well-defined way to translate the compact syntax to the XML syntax and back again by means of James Clark's Trang conversion tool, the advantage of using standard XML tools is not lost. RELAX NG has a simpler definition and validation framework than XML Schema, making it easier to use and implement. It also has the ability to use datatype framework plug-ins; a RELAX NG schema author, for example, can require values in an XML document to conform to definitions in XML Schema Datatypes.
ISO DSDL and other schema languages
The ISO DSDL (Document Schema Description Languages) standard brings together a comprehensive set of small schema languages, each targeted at specific problems. DSDL includes RELAX NG full and compact syntax, Schematron assertion language, and languages for defining datatypes, character repertoire constraints, renaming and entity expansion, and namespace-based routing of document fragments to different validators. DSDL schema languages do not have the vendor support of XML Schemas yet, and are to some extent a grassroots reaction of industrial publishers to the lack of utility of XML Schemas for publishing.
Some schema languages not only describe the structure of a particular XML format but also offer limited facilities to influence processing of individual XML files that conform to this format. DTDs and XSDs both have this ability; they can, for instance, provide attribute defaults. RELAX NG and Schematron intentionally do not provide such infoset-augmentation facilities.
International use
XML supports the direct use of almost any Unicode character in element names, attributes, comments, character data, and processing instructions (other than the ones that have special symbolic meaning in XML itself, such as the less-than sign, "<"). Therefore, the following is a well-formed XML document, even though it includes both Chinese and Cyrillic characters:
<?xml version="1.0" encoding="UTF-8"?>
<俄語>Китайский</俄語>
Displaying on the web
Generally, generic XML documents do not carry information about how to display the data. Without using CSS or XSLT, a generic XML document is rendered as raw XML text by most web browsers. Some display it with 'handles' (e.g. + and - signs in the margin) that allow parts of the structure to be expanded or collapsed with mouse-clicks.
In order to style the rendering in a browser with CSS, the XML document must include a reference to the stylesheet:
<?xml-stylesheet type="text/css" href="myStyleSheet.css"?>
Note that this is different from specifying such a stylesheet in HTML, which uses the <link> element.
XSLT (XSL Transformations) can be used to alter the format of XML data, either into HTML or other formats that are suitable for a browser to display.
To specify client-side XSLT, the following processing instruction is required in the XML:
<?xml-stylesheet type="text/xsl" href="transform.xsl"?>
Client-side XSLT is supported by many web browsers. Alternatively, one may use XSLT to convert XML into a displayable format on the server rather than being dependent on the end-user's browser capabilities. The end-user is not aware of what has gone on 'behind the scenes'; all they see is well-formatted, displayable data.
See the XSLT article for examples of XSLT in action.
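As a sketch of the server-side approach in Java, the javax.xml.transform (JAXP) API can apply a stylesheet to a document before the result is sent to the browser; the file names used here are only examples:
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class ServerSideXslt {
    public static void main( String[] args ) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Compile the stylesheet, then apply it to data.xml and write the output
        // (for example HTML) to standard output.
        Transformer transformer = factory.newTransformer( new StreamSource( "transform.xsl" ) );
        transformer.transform( new StreamSource( "data.xml" ), new StreamResult( System.out ) );
    }
}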
Extensions
· XPath makes it possible to refer to individual parts of an XML document (a Java sketch follows this list). This provides random access to XML data for other technologies, including XSLT, XSL-FO, XQuery etc. XPath expressions can refer to all or part of the text, data and values in XML elements, attributes, processing instructions, comments etc. They can also access the names of elements and attributes. XPaths can be used in both valid and well-formed XML, with and without defined namespaces.
· XInclude defines the ability for XML files to include all or part of an external file. When processing is complete, the final XML infoset has no XInclude elements, but instead has copied the documents or parts thereof into the final infoset. It uses XPath to refer to a portion of the document for partial inclusions.
· XQuery is to XML and XML Databases what SQL and PL/SQL are to relational databases: ways to access, manipulate and return XML.
· XML Namespaces enable the same document to contain XML elements and attributes taken from different vocabularies, without any naming collisions occurring.
· XML Signature defines the syntax and processing rules for creating digital signatures on XML content.
· XML Encryption defines the syntax and processing rules for encrypting XML content.
· XPointer is a system for addressing components of XML-based internet media.
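Following up on the XPath item above, a minimal, hedged sketch using the javax.xml.xpath API (Java SE 5 and later); the file name recipe.xml and the expression are illustrative:
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathExample {
    public static void main( String[] args ) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse( new File( "recipe.xml" ) );
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Evaluate an XPath expression: the text content of the first title element
        String title = xpath.evaluate( "//title[1]", doc );
        System.out.println( title );
    }
}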
XML files may be served with a variety of media types. RFC 3023 defines the types "application/xml" and "text/xml", which say only that the data is in XML, and nothing about its semantics. The use of "text/xml" has been criticized as a potential source of encoding problems and is now in the process of being deprecated. RFC 3023 also recommends that XML-based languages be given media types beginning with "application/" and ending with "+xml"; for example "application/atom+xml" for Atom.

JDBC

Java Database Connectivity (JDBC)
Java Database Connectivity (JDBC) is an API for the Java programming language that defines how a client may access a database. It provides methods for querying and updating data in a database. JDBC is oriented towards relational databases.
The Java 2 Platform, Standard Edition, version 1.4 (J2SE) includes the JDBC 3.0 API together with a reference implementation JDBC-to-ODBC bridge, enabling connections to any ODBC-accessible data source in the JVM host environment. This bridge is native code (not Java), closed source, and appropriate only for experimental use or for situations in which no other driver is available, not least because it provides only a limited subset of the JDBC 3.0 API: it was originally built and shipped with JDBC 1.0 for use with old ODBC v2.0 drivers (ODBC v3.0 was released in 1996).
Overview
JDBC has been part of the Java Standard Edition since the release of JDK 1.1. The JDBC classes are contained in the Java package java.sql. Starting with version 3.0, JDBC has been developed under the Java Community Process. JSR 54 specifies JDBC 3.0 (included in J2SE 1.4), JSR 114 specifies the JDBC Rowset additions, and JSR 221 is the specification of JDBC 4.0 (included in Java SE 6).
JDBC allows multiple implementations to exist and be used by the same application. The API provides a mechanism for dynamically loading the correct Java packages and registering them with the JDBC Driver Manager. The Driver Manager is used as a connection factory for creating JDBC connections.
JDBC connections support creating and executing statements. These may be update statements such as SQL's CREATE, INSERT, UPDATE and DELETE, or they may be query statements such as SELECT. Additionally, stored procedures may be invoked through a JDBC connection. JDBC represents statements using one of the following classes:
· Statement – the statement is sent to the database server each and every time.
· PreparedStatement – the statement is cached and its execution path is predetermined on the database server, allowing it to be executed multiple times in an efficient manner.
· CallableStatement – used for executing stored procedures on the database.
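As a sketch of the last of these, a stored procedure can be invoked through a CallableStatement; the procedure name calculateStatistics and its parameters are hypothetical, and conn is a java.sql.Connection as in the examples below:
// Prepare a call to a hypothetical stored procedure with two IN parameters.
CallableStatement cs = conn.prepareCall( "{call calculateStatistics(?, ?)}" );
try {
    cs.setString( 1, "myTable" );
    cs.setInt( 2, 100 );
    cs.execute();
} finally {
    // Release the statement's database resources as soon as it is no longer needed.
    cs.close();
}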
Update statements such as INSERT, UPDATE and DELETE return an update count that indicates how many rows were affected in the database. These statements do not return any other information.
Query statements return a JDBC row result set (ResultSet), which is used to walk over the returned rows. Individual columns in a row are retrieved either by name or by column number. There may be any number of rows in the result set. The row result set has metadata that describes the names of the columns and their types.
There is an extension to the basic JDBC API in the javax.sql package.
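For example, instead of going through the DriverManager, a connection can be borrowed from a javax.sql.DataSource, typically obtained via JNDI; the JNDI name jdbc/MyDataSource below is only an assumption about how the server is configured:
// Uses javax.naming.InitialContext and javax.sql.DataSource.
// A DataSource is normally configured in the application server and usually
// wraps a pool of connections.
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup( "jdbc/MyDataSource" );
Connection conn = ds.getConnection();
// ... use the connection as in the examples below, then close it.
conn.close();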
Example
The method Class.forName(String) is used to load the JDBC driver class. The line below causes the JDBC driver from some jdbc vendor to be loaded into the application. (Some JVMs also require the class to be instantiated with .newInstance().)
Class.forName( "com.somejdbcvendor.TheirJdbcDriver" );
In JDBC 4.0, it's no longer necessary to explicitly load JDBC drivers using Class.forName(). See JDBC 4.0 Enhancements in Java SE 6.
When a Driver class is loaded, it creates an instance of itself and registers it with the DriverManager. This can be done by including the needed code in the driver class's static block. e.g. DriverManager.registerDriver(Driver driver)
Now when a connection is needed, one of the DriverManager.getConnection() methods is used to create a JDBC connection.
Connection conn = DriverManager.getConnection(
"jdbc:somejdbcvendor:other data needed by some jdbc vendor",
"myLogin",
"myPassword" );
The URL used is dependent upon the particular JDBC driver. It will always begin with the "jdbc:" protocol, but the rest is up to the particular vendor. Once a connection is established, a statement must be created.
Statement stmt = conn.createStatement();
try {
stmt.executeUpdate( "INSERT INTO MyTable( name ) VALUES ( 'my name' ) " );
} finally {
//It's important to close the statement when you are done with it
stmt.close();
}
Note that Connections, Statements, and ResultSets often tie up operating system resources such as sockets or file descriptors. In the case of Connections to remote database servers, further resources are tied up on the server, e.g., cursors for currently open ResultSets. It is vital to close() any JDBC object as soon as it has played its part; garbage collection should not be relied upon. Forgetting to close() things properly results in spurious errors and misbehaviour. The above try-finally construct is a recommended code pattern to use with JDBC objects.
Data is retrieved from the database using a database query mechanism. The example below shows creating a statement and executing a query.
Statement stmt = conn.createStatement();
try {
ResultSet rs = stmt.executeQuery( "SELECT * FROM MyTable" );
try {
while ( rs.next() ) {
int numColumns = rs.getMetaData().getColumnCount();
for ( int i = 1 ; i <= numColumns ; i++ ) {
// Column numbers start at 1.
// Also there are many methods on the result set to return
// the column as a particular type. Refer to the Sun documentation
// for the list of valid conversions.
System.out.println( "COLUMN " + i + " = " + rs.getObject(i) );
}
}
} finally {
rs.close();
}
} finally {
stmt.close();
}
Typically, however, it would be rare for a seasoned Java programmer to code in such a fashion. The usual practice would be to abstract the database logic into an entirely different class and to pass preprocessed strings (perhaps derived themselves from a further abstracted class) containing SQL statements and the connection to the required methods. Abstracting the data model from the application code makes it more likely that changes to the application and data model can be made independently.
An example of a PreparedStatement query, using conn and the class from the first example.
PreparedStatement ps = conn.prepareStatement( "SELECT i.*, j.* FROM Omega i, Zappa j WHERE i.name = ? AND j.num = ?" );
try {
// In the SQL statement being prepared, each question mark is a placeholder
// that must be replaced with a value you provide through a "set" method invocation.
// The following two method calls replace the two placeholders; the first is
// replaced by a string value, and the second by an integer value.
ps.setString(1, "Poor Yorick");
ps.setInt(2, 8008);

// The ResultSet, rs, conveys the result of executing the SQL statement.
// Each time you call rs.next(), an internal row pointer, or cursor,
// is advanced to the next row of the result. The cursor initially is
// positioned before the first row.
ResultSet rs = ps.executeQuery();
try {
while ( rs.next() ) {
int numColumns = rs.getMetaData().getColumnCount();
for ( int i = 1 ; i <= numColumns ; i++ ) {
// Column numbers start at 1.
// Also there are many methods on the result set to return
// the column as a particular type. Refer to the Sun documentation
// for the list of valid conversions.
System.out.println( "COLUMN " + i + " = " + rs.getObject(i) );
} // for
} // while
} finally {
rs.close();
}
} finally {
ps.close();
} // try
Java Remote Method Invocation (RMI)
The Java Remote Method Invocation API, or Java RMI, is a Java application programming interface that performs the object-oriented equivalent of remote procedure calls. A typical implementation model of Java RMI uses stub and skeleton objects; Java 2 SDK, Standard Edition, v1.2 removed the need for a skeleton.
Two common implementations of the API exist:
1. The original implementation depends on Java Virtual Machine (JVM) class representation mechanisms and it thus only supports making calls from one JVM to another. The protocol underlying this Java-only implementation is known as Java Remote Method Protocol (JRMP).
2. In order to support code running in a non-JVM context, a CORBA version was later developed.
Usage of the term RMI may denote solely the programming interface or may signify both the API and JRMP, whereas the term RMI-IIOP (read: RMI over IIOP) denotes the RMI interface delegating most of the functionality to the supporting CORBA implementation.
The programmers of the original RMI API generalized the code somewhat to support different implementations, such as an HTTP transport. Additionally, work was done to CORBA, adding a pass-by-value capability, to support the RMI interface. Still, the RMI-IIOP and JRMP implementations do not have fully identical interfaces.
RMI functionality comes in the package java.rmi, while most of Sun's implementation is located in the sun.rmi package. Note that with Java versions before Java 5.0, developers had to compile RMI stubs in a separate compilation step using rmic; version 5.0 of Java and beyond no longer require this step. Jini offers a more advanced version of RMI in Java: it functions similarly but provides more advanced searching capabilities and mechanisms for distributed object applications.
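To make the programming model concrete, here is a hedged sketch of a minimal RMI service; the interface, class and registry names are purely illustrative:
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: every remotely callable method declares RemoteException.
interface TimeService extends Remote {
    long getCurrentTime() throws RemoteException;
}

// Implementation: an ordinary object that will be exported by the RMI runtime.
class TimeServiceImpl implements TimeService {
    public long getCurrentTime() { return System.currentTimeMillis(); }
}

public class TimeServer {
    public static void main( String[] args ) throws Exception {
        // Export the object (0 = any free port), start a registry and bind the stub.
        TimeService stub = (TimeService) UnicastRemoteObject.exportObject( new TimeServiceImpl(), 0 );
        Registry registry = LocateRegistry.createRegistry( 1099 );
        registry.rebind( "TimeService", stub );

        // A client (possibly in another JVM) would then do:
        // Registry r = LocateRegistry.getRegistry( "localhost" );
        // TimeService remote = (TimeService) r.lookup( "TimeService" );
        // System.out.println( remote.getCurrentTime() );
    }
}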

J2EE

Introduction to J2EE
Java Platform, Enterprise Edition (Java EE), formerly Java 2 Platform, Enterprise Edition (J2EE), is a widely used platform for server programming in the Java programming language. The Java EE Platform differs from the Java Standard Edition Platform (Java SE) in that it adds libraries which provide functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server.

Nomenclature, standards and specifications
The platform was known as Java 2 Platform, Enterprise Edition or J2EE until the name was changed to Java EE in version 5. The current version is called Java EE 5. The previous version is called J2EE 1.4.
Java EE is defined by its specification. As with other Java Community Process specifications, Java EE is also considered informally to be a standard since providers must agree to certain conformance requirements in order to declare their products as Java EE compliant; albeit with no ISO or ECMA standard.
Java EE includes several API specifications, such as JDBC, RMI, e-mail, JMS, web services, XML, etc., and defines how to coordinate them. Java EE also features some specifications unique to Java EE for components. These include Enterprise JavaBeans, servlets, portlets (following the Java Portlet specification), JavaServer Pages and several web service technologies. This allows developers to create enterprise applications that are portable and scalable, and that integrate with legacy technologies. A Java EE "application server" can handle the transactions, security, scalability, concurrency and management of the components that are deployed to it, meaning that the developers should be able to concentrate more on the business logic of the components rather than on infrastructure and integration tasks.
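As a small, hedged illustration of one such component, a servlet is a class that the application server invokes to handle HTTP requests; the class name and response content here are only examples:
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet; the container calls doGet() for each HTTP GET request
// mapped to this servlet in the deployment descriptor (web.xml).
public class HelloServlet extends HttpServlet {
    protected void doGet( HttpServletRequest request, HttpServletResponse response )
            throws ServletException, IOException {
        response.setContentType( "text/html" );
        PrintWriter out = response.getWriter();
        out.println( "<html><body><h1>Hello from Java EE</h1></body></html>" );
    }
}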

History
The original J2EE specification was developed by Sun Microsystems.
The J2EE 1.2 SDK was released in December 1999. Starting with J2EE 1.3, the specification was developed under the Java Community Process. Java Specification Request (JSR) 58 specifies J2EE 1.3 and JSR 151 specifies J2EE 1.4. The J2EE 1.3 SDK was first released by Sun as a beta in April 2001. The J2EE 1.4 SDK beta was released by Sun in December 2002. The Java EE 5 specification was developed under JSR 244 and the final release was made on May 11, 2006.
The Java EE 6 specification has been developed under JSR 316 and is scheduled for release in May, 2009.
General APIs
The Java EE APIs include several technologies that extend the functionality of the base Java SE APIs.
javax.ejb.*
The Enterprise JavaBeans (EJB) API defines a set of APIs that a distributed object container will support in order to provide persistence, remote procedure calls (using RMI or RMI-IIOP), concurrency control, and access control for distributed objects. This package contains the Enterprise JavaBeans classes and interfaces that define the contracts between the enterprise bean and its clients and between the enterprise bean and the EJB container. This package contains the largest number of exception classes (16 in all) in the Java EE 5 SDK.
javax.transaction.*
These packages define the Java Transaction API (JTA).
javax.xml.stream
This package contains readers and writers for XML streams. This package contains the only Error class in Java EE 5 SDK.
javax.jms.*
This package defines the Java Message Service (JMS) API. The JMS API provides a common way for Java programs to create, send, receive and read an enterprise messaging system's messages. This package has the maximum number of interfaces (43 in all) in the Java EE 5 SDK.
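A hedged sketch of sending a message with this API follows; the JNDI names jms/ConnectionFactory and jms/MyQueue are assumptions about how the server is configured:
// Uses the javax.jms and javax.naming APIs.
InitialContext ctx = new InitialContext();
ConnectionFactory factory = (ConnectionFactory) ctx.lookup( "jms/ConnectionFactory" );
Queue queue = (Queue) ctx.lookup( "jms/MyQueue" );

Connection connection = factory.createConnection();
Session session = connection.createSession( false, Session.AUTO_ACKNOWLEDGE );
MessageProducer producer = session.createProducer( queue );
// Create and send a single text message, then release the connection.
TextMessage message = session.createTextMessage( "hello, world" );
producer.send( message );
connection.close();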
javax.faces.component.html
This package contains the HTML component classes of the JavaServer Faces (JSF) API. JSF is a technology for constructing user interfaces out of components.
javax.persistence
This package contains the classes and interfaces that define the contracts between a persistence provider and the managed classes and the clients of the Java Persistence API. This package contains the maximum number of annotation types (64 in all) and enums (10 in all) in the Java EE 5 SDK.
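As a hedged sketch, a managed class under this API is an ordinary Java class marked with persistence annotations; the entity and field names here are illustrative:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// A minimal JPA entity. The persistence provider maps instances of this class
// to rows of a database table; the identifier is generated automatically.
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName( String name ) { this.name = name; }
}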
Certified application servers
Java EE 5 certified
· Sun Java System Application Server Platform Edition 9.0, based on the open-source server GlassFish
· GlassFish
· JBoss Application Server Version 5
· Apache Geronimo 2.0
· Apache OpenEJB via Apache Geronimo
· IBM WebSphere Application Server Community Edition 2.0, based on Apache Geronimo
· IBM WebSphere Application Server V7
· WebLogic Application Server 10.0 from BEA Systems
· Oracle Containers for Java EE 11
· SAP NetWeaver Application Server, Java EE 5 Edition from SAP
· JEUS 6, an Application Server from TmaxSoft
J2EE 1.4 certified
· JBoss 4.x, an open-source application server from JBoss.
· Apache Geronimo 1.0, an open-source application server
· Pramati Server 5.0
· JOnAS, an open-source application server from ObjectWeb
· Oracle Application Server 10g
· Resin, an application server with integrated XML support
· SAP NetWeaver Application Server, Java EE 5 Edition from SAP AG
· Sun Java System Web Server
· Sun Java System Application Server Platform Edition 8.2
· IBM WebSphere Application Server (WAS)
· BEA Systems WebLogic server 8
· JEUS 5 from TmaxSoft