
2    Palm OS Databases

Memory, Databases, and Files

Exploring Palm OS®

This chapter describes how to work with Palm OS® databases. Two separate header files declare the APIs you use: SchemaDatabases.h (documented in Chapter 7, "Schema Databases") and DataMgr.h (documented in Chapter 4, "Data Manager"). In addition, the File Streaming APIs, which let you access classic databases through a mechanism very similar to UNIX file streams, are declared in FileStream.h and documented in Chapter 5, "File Stream."

This chapter is divided into the following major sections:

Database Overview
Working with Schema Databases
Working with Non-Schema Databases
File Streaming Layer

IMPORTANT: To access data or resources on secondary storage (such as expansion cards), you use a different set of APIs. See Chapter 3, "Virtual File Systems," for more information.

Database Overview

A traditional file system reads all or a portion of a file from disk into a memory buffer, uses or updates the information in the buffer, and then writes the updated buffer back to disk. Because Palm Powered handhelds have limited amounts of dynamic RAM and use nonvolatile RAM instead of disk storage, a traditional file system is not optimal for storing and retrieving Palm OS user data. Thus, except when working with expansion media (an SD card, Memory Stick, and the like), Palm OS doesn't use a traditional file system. Instead of files, Palm OS applications work with databases.

A database is a collection of related rows (for schema databases) or records (for non-schema databases); each row or record belongs to one and only one database. A database might hold all address book entries, all datebook entries, and so on. A Palm OS application can create, delete, open, and close databases as necessary, just as a traditional file system can create, delete, open, and close a traditional file.

For those new to Palm OS programming, the term "database" can be somewhat misleading. Palm OS Cobalt supports three different types of database, some of which look more like conventional databases than others. Schema databases, which were introduced in Palm OS Cobalt, bear a strong resemblance to relational databases. Data is organized into tables, which consist of rows and columns. Schema databases use the concept of a schema to define the structure of a table row. Unlike relational databases, however, schema databases don't allow you to perform joins and other complex operations.

The other two database types are classified as "non-schema" databases because they are significantly less structured. There are two supported non-schema database types:

  • Classic databases are supported for compatibility with earlier versions of Palm OS. All versions of Palm OS back to Palm OS 1.0 support this database format, and this is the format used by applications running on Palm OS Cobalt through PACE.
  • Extended databases are an "extended" version of classic databases. There are three primary differences between classic and extended databases: extended database records can exceed 64 KB in length (classic records cannot); extended databases are uniquely identified by a combination of name and creator ID (classic databases are uniquely identified by name alone); and extended databases can store data using the processor's native byte order (classic databases must store record data in big-endian order, for compatibility with the 68K-based DragonBall CPU used in early Palm OS devices).

Palm OS Cobalt applications that must remain compatible with an earlier OS release—perhaps a version of the application exists that runs on earlier versions of Palm OS and this application must be able to work with the earlier version's data—will use classic databases. Those Palm OS Cobalt applications that don't have such a compatibility requirement should use either extended or schema databases instead. Which to use depends on the nature of the application. Schema databases provide a great deal of support for organizing the database contents and for security, at the expense of performance. Extended databases, on the other hand, are faster to read and write, but less secure and less structured—meaning that your application has to do the work of maintaining and interpreting record contents itself.

Schema Databases

Non-schema databases treat their contents as lists of mostly opaque records. The Data Manager knows just enough about each record to understand category assignment, modification status, and deletion status. Applications are entirely responsible for structuring and interpreting database record contents. Traditional Palm OS applications, written for 68K-based handhelds and for PACE, work exclusively with classic databases.

Schema databases add a layer of abstraction to the record contents. This extra layer of abstraction allows you to create more flexible applications, with improved sharing of data between applications. Because the Data Manager knows more about the structure of the database rows, it can provide additional capabilities, such as system-managed, optimized, and internationalized sorting. It lets you bind variables to various row fields, so as you move from one row to another the bound variables are automatically updated with the contents of the corresponding row's fields. And, you can create cursors, subsets of a database table's rows selected and sorted based upon application-specific criteria. Schema databases have other advantages as well:

  • They provide more standardized data storage.
  • They make synchronization simpler and more efficient.
  • They can be more easily extended with additional fields.
  • They make it much easier to create conduits, and easier to integrate a schema database with a database on a desktop computer or server.

Resources and Resource Databases

Applications can use the Data Manager to retrieve and save chunks of data conveniently. Non-schema databases that are designated as resource databases tag each chunk of data with a unique resource type and resource ID. These tagged data chunks are called resources. Resource databases are almost identical in structure to other non-schema databases except for a slight amount of increased storage overhead per resource record (two extra bytes).

Resources are typically used to store the user interface elements of an application, such as images, fonts, dialog layouts, and so forth. Part of building an application involves creating these resources and merging them with the actual executable code. In the Palm OS environment, an application is, in fact, simply a resource database with the executable code stored as one or more code resources and the graphics elements and other miscellaneous data stored in the same database as other resource types.

Applications may also find resource databases useful for storing and retrieving application preferences, saved window positions, state information, and so forth; these settings can be kept in a separate resource database.

Uniquely Identifying Databases

As in previous releases of Palm OS, classic databases must be uniquely identified by name. Schema and extended databases, however, are uniquely identified by a combination of the database's name and its creator ID. Thus, schema and extended database names need only be unique for a single creator ID: two such databases with the same name can reside on a single handheld as long as their creator IDs differ.

Database Attributes

In addition to the records that make up the database's contents—and in addition to the schemas that define the structure of the rows in a schema database table—all Palm OS databases have a set of flags that describe various aspects of the database itself, plus a set of dates identifying when the database was created, last modified, and last backed up. In addition, non-schema databases have an Application Info block to hold application settings and the like, and a Sort Info block to control the ordering of database records (schema databases use a different mechanism to control row ordering; see "Cursors").

You obtain the database attribute flags and dates, along with handles for the Sort Info block and the Application Info block if working with a non-schema database, by calling DmDatabaseInfo().

Automatic Database Backup and Restore

Palm OS Cobalt version 6.1 can be configured by a licensee to back up the contents of the RAM storage heaps to non-volatile storage such as NAND flash. In the event that the RAM storage heaps are corrupted or are lost for some reason, the storage heaps can then be restored to their saved state. This provides an additional level of data reliability beyond what's already provided by HotSync. Devices without backup batteries may take advantage of this backup and restore capability to prevent data loss between power on/off sessions.

For security, the backup is performed to a private internal VFS volume that can only be accessed by the Data Manager, only for purposes of backup and restore.

Backup is triggered on a limited set of events:

  • Database close. Any time that a database is closed, the database is backed up to the non-volatile store.
  • Database create. Upon creation, the database is backed up. This takes care of installed databases that are never modified and thus not otherwise backed up.
  • A successful call to DmSetDatabaseInfo(). Whenever a call to DmSetDatabaseInfo() succeeds, the database information is backed up to the non-volatile store.
  • Device sleep. Whenever the device goes to sleep as a result of the normal system sleep functionality, the Data Manager iterates through all open databases and backs them up to the non-volatile store. This takes care of those databases that are opened by an application and not closed until the application exits, and those databases that are opened by background threads that are running when the system goes to sleep.
  • An explicit call to DmInitiateAutoBackupOfOpenDatabase().

Every time the device resets with an indication that the contents of RAM may have been lost, the backup volume is restored to RAM. Before restoring the backup contents, a consistency check is performed on the backup and an attempt is made to fix any inconsistencies. Databases are only restored under these circumstances; developers cannot trigger a database restore programmatically.

Working with Schema Databases

Schema databases consist of one or more tables. All of the rows in a given table have the same structure.

All data in a schema database is represented in the form of two-dimensional tables. A table contains zero or more rows and one or more columns. All rows in a table have the same sequence of columns, but with a different series of values in those columns. Note that a row doesn't have to have a value for every column; the special value NULL can be used to indicate that a value is undefined.

As with a relational database, operations are defined by logic, not by the position of a row within a table. That is, you ask for all rows where (x = 3) and not for the first, third, and fifth rows, for example. The rows of a schema database table are in arbitrary order—the order in which they appear doesn't necessarily reflect the order in which they were entered or in which they are stored.

One of the strengths of the relational approach (which applies to schema databases) is that you can deal with the data as information and, ideally, not worry about the details of how it is represented or physically maintained in the database itself. Having to deal with these kinds of implementation details makes extended and classic databases more difficult to manage.

Schemas and Tables ^TOP^

In Palm OS Cobalt, a schema is simply the collective definitions of a table's columns. While there is no single structure or identifier that represents a schema, the DbTableDefinitionType structure contains a count of the number of columns in the table and a series of pointers to the structures that define those table columns: essentially, the schema (this structure also contains the table's name, which isn't part of the schema itself).

Each schema database can be heterogeneous in that it can support multiple tables. Because each table's definition includes the column definitions for that table—the schema—two tables can have the same schema, yet changes to one table's schema don't affect the other.

Tables can be defined at the time a database is created, or added later.

Schema access is gated by the access restrictions for the database. Read-only access to a database implies read-only access to all of that database's schemas (and thus any attempt to modify the schema will fail). See "Secure Databases" for more information on database access restrictions.

Logical (External) vs. Physical (Internal) Views

Schemas allow the Data Manager to decouple the logical (external) view of your data from the physical (internal) view. When working with a schema database you manipulate row data in terms of data types defined in the column property sets—this is the logical data view. In actual fact, however, the Data Manager stores row data internally in an unpublished variant format: the physical data view. This decoupling facilitates changes to internal data formats without affecting existing database consumers.

Data types defined in column property sets are Palm OS primitives or their vectors. The Data Manager converts between its physical data types and the logical data types that are enforced during field get and set operations.

Column Properties

A schema is a collection of column property sets. A column property set is represented as a DbSchemaColumnDefnType structure. This structure contains the following:

ID
A 32-bit application-defined identifier. This ID must be unique for a given table.
Name
An application-defined name for the column. The column name must be unique for a given table. It can be up to 32 bytes in length, including the terminating null character, and must be a valid SQL identifier consisting only of 7-bit ASCII characters. The column name is stored in a single application-defined language encoding.
Data Type
The type of data contained within the database column.
Size
The size, in bytes, for the column. For columns that contain variable-length strings, blobs, and vectors, this is the maximum size of the string, blob, or vector. For all other types this is the actual size of the type.
Attributes
A set of flags that indicate whether the column data can be modified, whether the column was added to the table after the table was created, and whether or not the column data will be synchronized. (Modifications made to a "non-syncable" column's data don't change the modification state for the row, and thus by themselves don't cause the row to be synchronized during a HotSync operation.)

These are built-in column properties provided by the Data Manager. In addition to these built-in properties, you can define custom properties for a column: properties that facilitate application-specific semantics for columns. For more information on manipulating the column definitions that make up a schema, see "Working with Column Definitions."
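The built-in properties described above can be modeled in plain C. The structure below is a simplified stand-in for illustration only, not the real DbSchemaColumnDefnType declared in SchemaDatabases.h; the validation helper mirrors the rule that both column IDs and column names must be unique within a table.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MODEL_MAX_COL_NAME 32   /* includes the terminating null, per the spec */

/* Simplified model of a column definition (NOT the SDK structure). */
typedef struct {
    uint32_t id;                        /* application-defined, unique per table */
    char     name[MODEL_MAX_COL_NAME];  /* unique per table, 7-bit ASCII         */
    uint8_t  type;                      /* one of the Table 2.1 data types       */
    uint32_t maxSize;                   /* max size for strings/blobs/vectors    */
    uint32_t attributes;                /* writable / added-later / no-sync      */
} ModelColumnDefn;

/* Verify that every column ID and every column name is unique
   within one table, as the Data Manager requires. */
bool model_schema_is_valid(const ModelColumnDefn *cols, size_t numCols)
{
    for (size_t i = 0; i < numCols; i++) {
        for (size_t j = i + 1; j < numCols; j++) {
            if (cols[i].id == cols[j].id ||
                strcmp(cols[i].name, cols[j].name) == 0)
                return false;
        }
    }
    return true;
}
```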

Column Data Types

Palm OS Cobalt schema databases support the column data types listed in Table 2.1.

Table 2.1  Supported schema column data types

uint8_t
Unsigned char. Storage: 1 byte. Range: 0 to 255.

uint16_t
Unsigned short int. Storage: 2 bytes. Range: 0 to 65535.

uint32_t
Unsigned int. Storage: 4 bytes. Range: 0 to 4294967295.

uint64_t
Unsigned long long int. Storage: 8 bytes. Range: 0 to 18446744073709551615.

int8_t
Signed char. Storage: 1 byte. Range: -128 to 127.

int16_t
Signed short int. Storage: 2 bytes. Range: -32768 to 32767.

int32_t
Signed int. Storage: 4 bytes. Range: -2147483648 to 2147483647.

int64_t
Signed long long int. Storage: 8 bytes. Range: -9223372036854775808 to 9223372036854775807.

float
Float. Storage: 4 bytes.

double
Double. Storage: 8 bytes.

Boolean
True/false value. Storage: 1 byte. Range: 0 or 1.

DateTimeType
Date-time type. Storage: 14 bytes.

DateType
Date expressed as an absolute date. Storage: 2 bytes.

TimeType
Time type. Storage: 2 bytes.

time_t
(dbDateTimeSecs) Time in seconds since the UNIX epoch. Storage: 4 bytes. Range: -2147483648 to 2147483647.

char
Fixed-length character string. Storage: m bytes, where m is the statically defined length and 1 <= m <= 255.

VarChar
Variable-length character string. Storage: n+4 bytes, where n is the actual string length and n <= m; m is the maximum defined length and 1 <= m <= 2^32.

blob
Variable-length array of bytes. Storage: n+4 bytes, where n is the actual data length and n <= m; m is the maximum defined length and 1 <= m <= 2^32.

Vector
Variable-length vector of Palm primitive numeric, string, and date-time types; see Table 2.2 for a list of supported vector types. Storage: n+4 bytes, where n is the number of bytes needed to contain the vector, up to 2^32 bytes.
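The fixed-size primitive column types in Table 2.1 correspond directly to the C99 <stdint.h> types, so their stated storage requirements can be checked with ordinary C, no Palm OS headers required. (The Palm-specific types such as DateTimeType are not covered by this check.)

```c
#include <stdint.h>

/* Returns 1 if the C99 fixed-width types match the Table 2.1
   storage requirements on this platform, 0 otherwise. */
int fixed_type_sizes_match(void)
{
    return sizeof(uint8_t)  == 1 && sizeof(int8_t)  == 1 &&
           sizeof(uint16_t) == 2 && sizeof(int16_t) == 2 &&
           sizeof(uint32_t) == 4 && sizeof(int32_t) == 4 &&
           sizeof(uint64_t) == 8 && sizeof(int64_t) == 8 &&
           sizeof(float)    == 4 && sizeof(double)  == 8;
}
```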

Table 2.2  Supported vector types

uint8_t vectors: uint8_t[]
uint16_t vectors: uint16_t[]
uint32_t vectors: uint32_t[]
uint64_t vectors: uint64_t[]
float vectors: float[]
double vectors: double[]
Boolean vectors: Boolean[]
DateTimeType vectors: DateTimeType[]
DateType vectors: DateType[]
TimeType vectors: TimeType[]
String vectors: Array of null-terminated strings, with an extra terminating null character marking the end of the vector. For instance, using 7-bit ASCII: "String1\0String2\0String3\0\0"


NOTE: In a string vector, the null characters must be interpreted as encoding-dependent null characters instead of null bytes. A null character may be multi-byte for a specific encoding scheme.
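For a single-byte encoding such as 7-bit ASCII, the string-vector layout above can be walked with a few lines of C. The helper below (`vector_string_count`, a hypothetical name for illustration, not a Data Manager API) stops at the empty string that terminates the vector; as the note warns, a multi-byte encoding would need encoding-aware null detection instead of this byte-wise scan.

```c
#include <stddef.h>
#include <string.h>

/* Count the strings in a string vector: null-terminated strings
   packed back to back, with an extra null marking the vector's end.
   Assumes a single-byte encoding (e.g. 7-bit ASCII). */
size_t vector_string_count(const char *vec)
{
    size_t count = 0;
    while (*vec != '\0') {          /* an empty string ends the vector */
        count++;
        vec += strlen(vec) + 1;     /* skip the string and its terminator */
    }
    return count;
}
```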

Database, Table, and Column Identifiers

Schema databases are uniquely identified by a combination of their name and their creator code. However, most of the schema database functions take database identifiers of the type DatabaseID. The function DmFindDatabase() returns a database ID for an existing database, while DbCreateDatabase() creates a new database (given a name, creator code, and type) and returns a database ID for the newly-created database.

Database tables are identified by name. There is no need for a numeric "table identifier." However, each database does maintain an array of tables that you can access by index. This array is zero-based; its indices range from zero to n-1, where n is the number of tables defined for that database. This value can be obtained by calling DbNumTables(). Given the index of a table within a database, you can translate it into the table's name by calling DbGetTableName().

A column is uniquely identified by either the column's descriptive name or by a 32-bit ID (both must be unique). These application-defined column names and IDs allow multiple applications within a given application context to share a common semantic understanding of a given column type. For instance, two applications might select a name of "EMNO" for the employee number column of the "EMPLOYEE" database and use column-based search and retrieval of values in the column named "EMNO". The design-time specification of both column identifiers and table names facilitates the development of public metadata interfaces for databases and encourages generic data exchange based on these interfaces.

As with tables in a database, you can iterate through the columns in a table. To obtain the number of columns in a given table, call DbNumColumns(). You can retrieve the definitions for each of the table's columns by calling DbGetColumnDefinitions(). To obtain the ID of an individual column given its index (which again ranges from 0 to n-1, where n is the number of columns in the table), use DbGetColumnID().

Creating, Modifying, and Deleting Tables

You can create tables either at the time you create a database or after the fact. Each table is a DbTableDefinitionType structure; this structure contains the table's name and an array of column definitions. Allocate memory as needed for the DbTableDefinitionType structures (and for the DbSchemaColumnDefnType structures needed to define the table's columns), and initialize them as appropriate for your application. Then, either supply them when creating your database (with DbCreateDatabase() or DbCreateSecureDatabase(), as appropriate), or add them to an existing database with DbAddTable().
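The steps above can be summarized in a sketch. The function names are the ones used in this chapter, but the structure fields and argument lists shown are assumptions made for illustration; consult SchemaDatabases.h for the actual declarations. Treat this as pseudocode rather than compilable code.

```c
/* Pseudocode sketch -- field names and signatures are assumptions. */
DbTableDefinitionType table;
DbSchemaColumnDefnType cols[2];

cols[0].id   = 100;        /* unique 32-bit column ID */
cols[0].name = "EMNO";     /* unique column name      */
/* ...set type, size, and attributes for each column... */

table.name        = "EMPLOYEE";
table.numColumns  = 2;
table.columnListP = cols;

/* Supply the definition when creating the database... */
err = DbCreateDatabase(name, creator, type, 1, &table, &dbID);

/* ...or add the table to an existing, open database later. */
err = DbAddTable(dbRef, &table);
```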

You can remove a table from a database only if the table contains no non-deleted rows. If the table contains non-deleted rows, create a cursor that selects all of the table's rows, and then call DbCursorRemoveAllRows(). Once the table is empty, call DbRemoveTable() to remove the table from the database.

When modifying an existing table, you are limited to adding and removing columns and modifying custom column properties. Get the existing table definition by calling DbGetTableSchema(). Use DbAddColumn() to add a column to an existing table.

Working with Column Definitions

Each table maintains a list of column definitions. As discussed in "Database, Table, and Column Identifiers," given an index into that list you can obtain the corresponding column ID. This ID is necessary to work with individual columns, but isn't needed to obtain the complete set of column definitions that make up a schema.

To obtain the column definitions for a table, you can use one of two functions. DbGetAllColumnDefinitions() retrieves all column definitions for the specified table, while DbGetColumnDefinitions() retrieves one or more column definitions for the table—supply an array of column IDs indicating which column definitions are to be retrieved. Both functions return an array of column definitions (DbSchemaColumnDefnType structures); when you are done with this array you must release the memory consumed by the array with a call to DbReleaseStorage().

In addition to any custom properties you define for a column definition, all columns have a set of built-in properties. These built-in properties are read-only, to prevent applications from modifying existing data row columns in a way that can impact other data consumers. Each built-in property has a corresponding constant definition that can be used as input to a generic accessor—DbGetColumnPropertyValue()—that retrieves the value of the specified column property. The constant definitions for the built-in properties are predefined; see "DbSchemaColumnProperty" for the constants themselves. The following are the built-in properties for a column:

  • Name (must be unique)
  • Data type
  • Size (maximum byte size for variable-length strings, blobs, and vectors)
  • Attributes

Unlike the built-in properties, custom properties may be read, written, and deleted. Custom property IDs must fall outside the built-in property ID range; that is, they must be greater than dbColumnPropertyUpperBound.

For a given column, define custom properties using DbSetColumnPropertyValue() or DbSetColumnPropertyValues(). If the specified property ID does not exist, a custom property is created with the specified ID and value. If the specified property ID exists, its value is updated to the new value.

The value of any property—whether built-in or custom—can be obtained by calling either DbGetColumnPropertyValue(), to obtain a single property value, or DbGetColumnPropertyValues() to obtain multiple property values at one time.

To remove a property from a given column, call DbRemoveColumnProperty(). Note that this function is very different from DbRemoveColumn(): whereas DbRemoveColumnProperty() removes only a property from a column, DbRemoveColumn() removes an entire column from a table, along with that column's data.

Row Attributes

Schema database rows can have the attributes listed in Table 2.3.

Table 2.3  Schema database row attributes

dbRecAttrArchive
The row's data is preserved until the next HotSync. When the dbRecAttrArchive bit is set, the dbRecAttrDelete bit is set as well, so archived rows are otherwise treated like deleted rows.

dbRecAttrDelete
The row has been deleted.

dbRecAttrReadOnly
The row is read-only and cannot be written to.

dbRecAttrSecret
The row is private.


NOTE: The Data Manager does not place any semantics on the read-only attribute. It is up to the application to enforce the read-only semantics.

The read-only attribute is used to support certain record sharing scenarios that allow a user to view a record, but not to modify it. Note that schemas also allow the definition of "always writable" columns that allow particular fields to be writable in a read-only row. This might be used, for example, in a calendar event for a TV show that is read-only (you can't reschedule the show); the field containing the alarm information would be "always writable" allowing each user the option of setting an alarm.

Table 2.4 lists the functions that you use to get and set a schema database row's row ID, category, and attributes.

Table 2.4  Functions used to access row information

Categories

Categories are a user-controlled means of grouping or filtering records or rows. Non-schema databases allow a record to be a member of only one of 15 categories, or "Unfiled." Schema database rows, on the other hand, can be members of any combination of up to 255 categories (or of none, the equivalent of "Unfiled"). Thus, where in an extended database a record might have to fall into either the "Personal" category or the "Business" category, in a schema database a row could fall into both.

As with non-schema databases, category information is local to a database. However, unlike non-schema databases which store information about that database's categories in the Application Info block, schema databases rely upon an internal "category info" block to contain this information.

Information about the database's categories, such as the number and names of the categories, as well as the order in which they occur in a UI list, is controlled by the Category Manager. The Data Manager is only responsible for managing the category membership of individual database rows.

Category membership for a row is limited by the number of categories that can be defined locally in a schema database: since a database supports at most 255 categories, any given row can be a member of at most 255 categories.

In a non-schema database, records are always in exactly one category ("Unfiled" is itself a category). In a schema database, rows may be in one category, several categories, or none. The notion of "Unfiled" as a category doesn't apply here, since a row shouldn't be able to be in "Unfiled" and in other categories at the same time. Because applications can display or otherwise operate on rows with no category membership, a row that is a member of no database categories can be thought of as "Unfiled." Note that the Category Manager controls how rows with no category membership are displayed to end users.

The Data Manager stores category IDs as category membership information for a record or row. Storing category IDs abstracts the Data Manager from any modifications performed on the internal category structure, such as adding or deleting a category.
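A row's multi-category membership can be pictured as a small set of category IDs, with the empty set playing the role of "Unfiled." The plain-C model below is for illustration only and is not how the Data Manager stores membership internally.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MODEL_MAX_CATEGORIES 255   /* a schema database supports at most 255 */

/* Simplified model of one row's category membership. */
typedef struct {
    uint8_t ids[MODEL_MAX_CATEGORIES];  /* member category IDs          */
    size_t  count;                      /* how many of ids[] are in use */
} ModelRowCategories;

/* A row with no category membership is effectively "Unfiled". */
bool model_row_is_unfiled(const ModelRowCategories *row)
{
    return row->count == 0;
}

/* True if the row is a member of the given category ID. */
bool model_row_in_category(const ModelRowCategories *row, uint8_t catID)
{
    for (size_t i = 0; i < row->count; i++)
        if (row->ids[i] == catID)
            return true;
    return false;
}
```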

The following functions let you manipulate a schema database row's category membership:

DbSetCategory()
Sets category membership for a single database row.
DbAddCategory()
Makes the specified row a member of one or more additional categories.
DbGetCategory()
Retrieves the category membership for the specified row.
DbNumCategory()
For a specified row, determines how many categories the row is a member of.
DbRemoveCategory()
Removes category membership in the specified categories from a single row.

These functions let you manipulate rows that meet the given category membership criteria:

DbIsRowInCategory()
Determines if a row has membership for the specified categories, depending on the given match mode criteria.
DbMoveCategory()
Replaces one or more categories with the specified category for all rows, depending on the given match mode criteria.
DbRemoveCategoryAllRows()
Removes category membership in the specified categories from all rows in the database, depending on the match mode criteria.
DbCursorOpenWithCategory()
Creates and opens a cursor containing all rows in the specified table that conform to a specified set of flags, ordered as specified. Rows are filtered based upon category membership.

The Application Info Block

Schema databases don't have a dedicated Application Info block. For application-specific data of the type found in a non-schema database's Application Info block, create a database table specifically for this purpose.

Schema Database Rows

As discussed in "Schemas and Tables," a schema database table can have zero or more rows, and each row within the table shares a common structure, or schema.

Rows are identified by a 32-bit identifier that is unique within the database. You supply the row ID (or, often, the cursor ID as discussed under "Cursors") when archiving rows, copying row contents, deleting rows, and the like. In the rare instance that you find yourself with a row ID independent of the table from which it came, you can determine to which table the row belongs by calling DbGetTableForRow().

Creating New Rows

To create a row, construct an array of DbSchemaColumnValueType structures, one for each of the row's values. To add your row to a table (you can't add a row to a database without adding it to a database table), you pass the structures to DbInsertRow(). Assuming that the row was added to the table successfully, this function returns the row ID of your new row. Optionally, you can add an "empty" row by calling DbInsertRow() without supplying the DbSchemaColumnValueType structures. See the description of DbInsertRow() for more information.
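As a sketch (the DbSchemaColumnValueType field names and the DbInsertRow() argument list shown here are assumptions made for illustration; see SchemaDatabases.h for the actual declarations, and treat this as pseudocode):

```c
/* Pseudocode sketch -- field names and signatures are assumptions. */
DbSchemaColumnValueType colVals[1];

colVals[0].columnID = 100;            /* which column this value fills */
colVals[0].data     = &employeeNum;   /* pointer to the value          */
colVals[0].dataSize = sizeof(employeeNum);

/* Insert the populated row; on success rowID receives the new row's ID. */
err = DbInsertRow(dbRef, "EMPLOYEE", 1, colVals, &rowID);

/* Or insert an "empty" row with no initial column values. */
err = DbInsertRow(dbRef, "EMPLOYEE", 0, NULL, &rowID);
```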

Rows added to a table are added to the end of the database. You aren't given the opportunity to specify the position of the row within the table. The schema database APIs also don't include a function for altering the position of a row within a table. That is because when working with schema database rows you often are working within the context of a cursor, within which you can perform such operations.

Reading Data

Columns in a row are identified either by a 32-bit application-defined ID or by an index. The index is zero-based and ranges from 0 <= index < n, where n is the number of columns in the schema. Note that the index of columns added after the schema is initially created may change, so do not make persistent references to table columns by their index.

Individual row column values may only be extracted using column IDs. The Data Manager provides a function that returns a column's ID given its index: DbGetColumnID().

DbGetColumnValue() retrieves a single column value. This function is restrictive, however, in that it does not allow retrieval into user-allocated buffers; it always returns a reference to a storage heap buffer. Also, for greater efficiency most applications will want to retrieve multiple columns using either DbGetColumnValues() or DbCopyColumnValues().

For columns containing string or vector data, you can retrieve partial column values through the use of an offset. This is useful for columns containing large strings or blobs where, for space efficiency, it makes sense to read or write only a portion of the column's data.

When retrieving values, you can retrieve them either by copy or by reference.

Value Copy
You allocate output buffers, enclose each in a DbSchemaColumnValueType structure, and pass them to the Data Manager by calling either DbCopyColumnValue() or DbCopyColumnValues(). The Data Manager then copies column data into the buffers.
Value Reference
You call DbGetAllColumnValues(), DbGetColumnValues(), or DbGetColumnValue(), and receive back references to column data. This saves RAM by not requiring an additional buffer for column value storage. When you are done working with the data, you must explicitly release the Data Manager-allocated buffer with DbReleaseStorage(), which unlocks the row.

The storage locality of the buffers for the various value retrieval functions is detailed in Table 2.5 for different database types.

Table 2.5  Buffer storage locality for column value retrieval functions

DbGetAllColumnValues(), DbGetColumnValues(), DbGetColumnValue()
Non-secure: the Data Manager returns references to storage-heap-based column values.
Secure: the Data Manager returns references to dynamic-heap-based column values; references to storage heap values are never returned for secure databases.

DbCopyColumnValue(), DbCopyColumnValues()
Non-secure and secure: the Data Manager copies column values into user-allocated dynamic heap storage.

The code excerpt in Listing 2.1 illustrates how you can retrieve a single column value with DbGetColumnValue().

Listing 2.1  Retrieving a single column value


status_t errCode; 
char nameP[25]; 
void *valueP; 
uint32_t valueSize; 
uint32_t columnID = 768; 
 
errCode = DbGetColumnValue(dbRef, rowID, columnID, 0,
   &valueP, &valueSize); 
if (errNone == errCode){ 
   // copy the column value, clamping it to the local buffer size 
   if (valueSize > sizeof(nameP)) 
      valueSize = sizeof(nameP); 
   memcpy(nameP, valueP, valueSize); 
} else { 
   ErrDisplay("Error in retrieving column value"); 
   return errCode; 
} 
 
// release the storage heap buffer returned by the Data Manager 
DbReleaseStorage(dbRef, valueP); 

The code in Listing 2.2 is similar to the above, but it shows how to use DbGetAllColumnValues() to retrieve every column value for a database row with a single call.

Listing 2.2  Retrieving all column values


DbSchemaColumnValueType *columnValueArray; 
status_t errCode; 
uint32_t numColumns;  
 
errCode = DbGetAllColumnValues(dbRef, rowID,
   &numColumns, &columnValueArray); 
if (errNone == errCode){ 
   // iterate through the column value array 
   for (int i=0; i<numColumns; i++){ 
      if (errNone == columnValueArray[i].errCode){ 
         // process each column value 
      } else { 
         // handle error in retrieving column value. 
         ErrDisplay("Error in retrieving column value"); 
         break; 
      } 
   } 
} else { 
   ErrDisplay("Error in retrieving column values"); 
   return errCode; 
} 
 
// Release storage heap buffer returned by the Data Manager 
// This invalidates all columnValueArray[i].columnData 
// references. 
DbReleaseStorage(dbRef, columnValueArray); 

In addition to retrieving a single column value or all column values, you can set up an array of column IDs and use DbGetColumnValues() to retrieve a subset of the row's values. Listing 2.3 illustrates the use of DbGetColumnValues() in this way.

Listing 2.3  Retrieving multiple, specific column values


DbSchemaColumnValueType *columnValueArray; 
status_t errCode; 
uint32_t columnIDArray[] = {768, 770, 771}; 
uint32_t numColumns = sizeof(columnIDArray)/sizeof(uint32_t); 
 
errCode = DbGetColumnValues(dbRef, rowID, numColumns,
   columnIDArray, &columnValueArray); 
if (errNone == errCode){ 
   // iterate through the column value array 
   for (int i=0; i<numColumns; i++){ 
      if (errNone == columnValueArray[i].errCode){ 
         // process each column value 
      } else { 
         // handle error in retrieving column value. 
         ErrDisplay("Error in retrieving column"); 
         break; 
      } 
   } 
} else { 
   ErrDisplay("Error in retrieving column values"); 
   return errCode; 
} 
 
// Release storage heap buffer returned by the Data Manager. 
// This invalidates all columnValueArray[i].columnData 
// references. 
DbReleaseStorage(dbRef, columnValueArray); 

As a final example, Listing 2.4 shows how to retrieve multiple column values but have them copied into pre-allocated buffers by DbCopyColumnValues().

Listing 2.4  Copying multiple, specific column values


DbSchemaColumnValueType columnValueArray[4]; 
uint32_t numColumns = sizeof(columnValueArray) /
   sizeof(DbSchemaColumnValueType); 
status_t errCode; 
 
typedef struct { 
   char userName[20]; 
   char userAddressLine1[25]; 
   char userAddressLine2[25]; 
   char userAddressLine3[25]; 
} userDetailsType; 
userDetailsType user; 
 
columnValueArray[0].columnID = 768; 
columnValueArray[0].data = user.userName; 
columnValueArray[0].dataSize = sizeof(user.userName); 
 
columnValueArray[1].columnID = 770; 
columnValueArray[1].data = user.userAddressLine1; 
columnValueArray[1].dataSize = sizeof(user.userAddressLine1); 
 
columnValueArray[2].columnID = 771; 
columnValueArray[2].data = user.userAddressLine2; 
columnValueArray[2].dataSize = sizeof(user.userAddressLine2); 
 
columnValueArray[3].columnID = 772; 
columnValueArray[3].data = user.userAddressLine3; 
columnValueArray[3].dataSize = sizeof(user.userAddressLine3); 
 
errCode = DbCopyColumnValues(dbRef, rowID,
   numColumns, columnValueArray); 
if (errNone == errCode){ 
   // iterate through the column value array to check 
   // for retrieval errors 
   for (int i = 0; i < numColumns; i++){ 
      if (errNone != columnValueArray[i].errCode){ 
         // handle error in retrieving column value 
         ErrDisplay("Error in retrieving column value"); 
         break; 
      } 
   } 
 
   // process the copied values directly from the user 
   // structure (or from columnValueArray[i].data), for example: 
   FldSetTextPtr(fldP, user.userName); 
} else { 
   ErrDisplay("Error in retrieving column values"); 
   return errCode; 
} 
 
// no storage heap buffer release is required here because the 
// column values were copied into user-allocated buffers 

Writing Data

Just as you can read either a single column value or multiple column values, you can also write a single column value or multiple column values. DbWriteColumnValue() writes a single column value to the database. As when reading, for greater efficiency when writing more than one column value call DbWriteColumnValues() rather than calling DbWriteColumnValue() multiple times.

Partial column value writes are also possible for string, blob, and vector columns through the use of an offset. This is useful for columns that contain large strings or blobs where, for space efficiency reasons, it makes sense to write only a portion of the column value.

When calling either of these DbWrite...() functions, the Data Manager copies the input data values to the storage heap as row data. Because the database now contains a copy of the data, you may then free the input data.

Listing 2.5 shows how to use DbWriteColumnValue() to write a single column value to a schema database.

Listing 2.5  Writing a single column value


uint32_t columnID = 1034; 
char newName[] = "Terrence"; 
uint32_t nameSize = strlen(newName) + 1;  // include the null 
int32_t oldSize = -1;   // replace the entire column's data 
 
// This overwrites the old name with the new name. Other 
// variations are possible depending on the combination of 
// oldSize (bytes to replace) and nameSize (bytes supplied) 
if (errNone != DbWriteColumnValue(dbRef, &rowID,
   columnID, 0, oldSize, newName, nameSize)) { 
   // handle error in writing column value. 
   ErrDisplay("Error in writing column value"); 
} 

Listing 2.6 shows how to use DbWriteColumnValues() to write multiple column values to a schema database.

Listing 2.6  Writing multiple column values


DbSchemaColumnValueType columnValueArray[3]; 
uint32_t columnIDArray[] = {1034, 1035, 1036}; 
uint32_t numColumns = sizeof(columnIDArray)/sizeof(uint32_t); 
status_t errCode; 
 
typedef struct { 
   uint32_t orderID; 
   char orderType[4]; 
   uint32_t orderQuantity; 
} orderDetailsType; 
 
orderDetailsType order; 
 
columnValueArray[0].data = &order.orderID; 
columnValueArray[0].dataSize = sizeof(order.orderID); 
columnValueArray[0].columnID = columnIDArray[0]; 
 
columnValueArray[1].data = order.orderType; 
columnValueArray[1].dataSize = sizeof(order.orderType); 
columnValueArray[1].columnID = columnIDArray[1]; 
 
columnValueArray[2].data = &order.orderQuantity; 
columnValueArray[2].dataSize = sizeof(order.orderQuantity); 
columnValueArray[2].columnID = columnIDArray[2]; 
 
if (errNone != DbWriteColumnValues(dbRef, &rowID,
   numColumns, columnValueArray)){ 
   // handle error in writing column value. 
   ErrDisplay("Error in writing column value"); 
} 

Deleting Rows

Delete individual database rows by calling DbDeleteRow(). To delete a set of rows in a single table, create a cursor that identifies those rows and then call DbCursorDeleteAllRows().

Cursors ^TOP^

Cursors simplify data access for schema databases. A cursor is a logical view of a subset of rows from a table, ordered as specified by the cursor. Once a cursor is created, an application can iterate through the cursor's rows, retrieve data from those rows, and write data to them.

Cursors are temporary. They are not saved with the database. Cursors are simple to create and an application can have multiple cursors active at the same time, including multiple cursors on the same table.

With the exception of DbInsertRow(), schema database functions with row access semantics can take either a row ID or a cursor ID as a parameter. These are both uint32_t values and generally may be used interchangeably. The Data Manager derives the actual type of the parameter based on a value-encoding scheme it uses for row IDs; this scheme ensures that a row ID is always differentiable from a cursor ID. If you need to know whether a given identifier is a row ID or a cursor ID (or neither), you can make use of the functions DbIsRowID() and DbIsCursorID().

The rows in a cursor needn't be sorted. A cursor that is opened unsorted is said to use the default sort index. In this instance, the string you supply for the sql parameter in the DbCursorOpen...() call should consist of the name of the table containing the database rows to be included and an optional WHERE clause indicating which of the table's rows should be included in the cursor. (See "The WHERE Clause" for more information on the WHERE clause.)

Creating Cursors

Create a cursor with DbCursorOpen() or DbCursorOpenWithCategory(). To create a cursor you supply a reference to an open database; a SELECT statement that specifies the database table from which the rows are to be taken, optional selection criteria (a WHERE clause), and optional sort criteria (an ORDER BY clause); and a set of flags that indicate whether deleted or secret rows should be included in the cursor, whether the rows should be sorted by category, and so on. (See "Cursor Open Flags" for the complete set.) If you use DbCursorOpenWithCategory() you can also limit the rows in the cursor to those that meet the specified category criteria.


IMPORTANT: The sort index—that is, the SELECT statement—that you supply when creating the cursor must have been added to the table prior to its use in the DbCursorOpen... call. See the documentation for the DbAddSortIndex() function for more information.

The SELECT Statement

You use a limited form of the standard SQL SELECT statement to specify the rows that make up the cursor and the order in which those rows are to occur. You pass this SELECT statement, as an ASCII string, to DbCursorOpen...(). The following is the basic format of the schema database SELECT statement:


[SELECT * FROM] tableName [WHERE column op arg] 
[ORDER BY (col1, col2, ...) [DESC | ASC | CASED | CASELESS] 
[, col...]] 

"SELECT * FROM" is entirely optional; its inclusion has no effect at this point: schema database cursors don't do projection. tableName is the only required part of this statement, and must identify the table from which the cursor rows are to be taken. The optional WHERE clause allows you to filter the rows to be included in the cursor; see "The WHERE Clause" for a complete description of this clause.

The ORDER BY clause, also optional, controls the sorting of the rows within the cursor. Schema databases support two levels of sort keys, using parentheses to identify the levels. The ORDER BY clause is perhaps best illustrated by way of example:


myTable ORDER BY LNAME, FNAME DESC, (34, 56) ASC CASED 

The rows are sorted according to the column names and IDs as listed here. The first column ("LNAME", in the above example) gets the highest priority. The second column ("FNAME", in the above example) determines the order within duplicate values of the first. And the third column determines the order within duplicate values of the second. In this example the third column ID is a two-level key: column 34 is used unless that column is empty, in which case column 56 is used instead. DESC, ASC, and CASED are options that clarify how the sort is performed. The following options are allowed:

DESC
(or DESCENDING): sort in descending order.
ASC
(or ASCENDING): sort in ascending order. This is the default if neither DESC nor ASC is specified.
CASED
Take case into account when sorting.
CASELESS
Ignore case when sorting. This is the default if neither CASED nor CASELESS is specified.

Before you can use a SELECT statement when opening a cursor (other than one corresponding to the default sort index), you must have added a sort index with a matching SELECT statement to the database. This is done for efficiency reasons: schema databases maintain a list of rows in sorted order for each of the database's sort indices, and as a row is added, deleted, or modified, the row lists for each sort index that applies to that row are updated. Because the lists are maintained in sorted order, the Data Manager doesn't have to perform a sort operation when you open a cursor that corresponds to an existing sort index.

Sort Indices

Sort indices allow you to specify how table rows should be automatically sorted. These sort indices are maintained by the Data Manager and are stored as part of the database. Any application that has read authorization for a database can use the sort indices for that database. Any application that has write authorization for the database can add, remove, or edit the sort indices for a database.

There is no limit to the number of sort indices that you can define for a database, although for performance reasons you should keep the number small. A large number of sort indices slows the addition, deletion, and modification of rows, because every applicable index must be adjusted as data in the database changes.

When creating a sort index, you use the format discussed under "The SELECT Statement" to specify the table name and the keys (by column name or ID) that constitute the sort index. A sort index can sort on multiple keys; one of those keys is designated as the primary sort key. The other key specifications are optional and constitute the secondary sort keys.

Each key definition consists of the set of columns that constitute the key, the sort order (ascending or descending), and an indication as to whether or not row comparisons should be made in a case-sensitive manner. A key can be composed of multiple columns, although all of a key's columns must be of the same type. During a sort index update, when comparing two rows, if a row does not contain data in the first column of the sort key, the next specified column is checked and so on until a column with data is found. If the data in these two columns is equal, the next non-empty specified columns are checked.

The Data Manager uses its own internal sorting and comparison routines to keep the index automatically sorted. Whenever a field is updated, all indices (except the default index) that use that field are automatically updated.

Sort indices support the data types listed in Table 2.6. Only columns of the listed types may be used for the sort indices. For dbChar and dbVarChar data types, you can indicate whether or not a case-sensitive comparison should be performed. Note that the Data Manager relies upon the Text Manager comparison APIs when comparing these data types. This ensures correct sorting with the appropriate case-sensitivity on localized string data. (Data is sorted using the current system locale.) Blob data (dbBlob) is compared using a simple memcmp().

Table 2.6  Data types supported by sort indices 

dbUInt8, dbUInt16, dbUInt32, dbUInt64, dbInt8, dbInt16, dbInt32, dbInt64, dbFloat, dbDouble, dbBoolean, dbDateTime, dbDate, dbTime, dbChar, dbVarChar, dbBlob

Application-provided comparison functions are not supported by sort indices, due to the performance overhead of having to call and potentially launch an application each time a field is modified.

Add a sort index to a database with DbAddSortIndex(). If you no longer need a particular sort index you can improve the efficiency of the database by removing it (so that the database no longer has to maintain a list of rows in sorted order for that sort index) by calling DbRemoveSortIndex(). Use the following functions to further manipulate the sort indices in a schema database:

DbNumSortIndexes()
Get the number of sort indices defined for a given database. Within a database the defined sort indices have index values that range from 0 to one less than this number. Thus this function is particularly useful when iterating through a database's sort indices.
DbGetSortDefinition()
Get a sort index given its position in the list of sort indices defined for a database.
DbHasSortIndex()
Determine whether a particular sort index has been defined for a database. This function takes the same string that you supply when adding a sort index to a database or opening a cursor.

When you no longer need a particular cursor, call DbCursorClose() to free all resources associated with the cursor.

An application can temporarily suspend automatic sorting of the currently opened database by calling DbEnableSorting() with the enable parameter set to false. This can be useful when doing a bulk update to the database, or during synchronization. Calling DbEnableSorting() with the enable parameter set to true will re-enable automatic sorting and causes the indices to be re-sorted.

The WHERE Clause

The Data Manager parses WHERE clauses and uses the information provided by applications to filter the set of rows returned as members of a cursor. For example, an application might request a cursor containing all rows where a given column's value is greater than 42.

The general format of the WHERE clause is:


column_name_or_ID operator value 

In an SQL string the WHERE clause must come after the table name and before the ORDER BY clause, if one is provided. A simple example is "table WHERE AGE >= 42"; the resulting cursor would only contain rows where the value of the column named "AGE" is greater than or equal to 42.


NOTE: Although the general format of the WHERE clause indicates that you can use a column ID in place of the column name, this may not be supported in future releases. Developers should use column names when specifying a WHERE clause.

Complex requests are supported through the operators AND and OR. Both of these operators take WHERE clauses as their operands, allowing you to string requests together. OR has a lower operator precedence than AND, so all of the AND conditions are evaluated before the OR conditions. You can use parentheses to group sub-clauses if operator precedence is an issue.

The PS_LIKE operator allows applications to perform sub-string matching. The operand is compared with the value in the requested column using the TxtFindString() function. Positive matches are added to the cursor, while non-matches are not.

The IS NULL and IS NOT NULL operators allow you to determine if a column has a value or is NULL. A NULL column value represents a lack of any value for a column. These operators may be used on all column types.

Table 2.7 lists the supported operators and the column types they can be used with.

Table 2.7  WHERE clause operators 

=
Equal to. Supported operand types: dbBoolean, dbUInt32, dbInt32, dbUInt16, dbInt16, dbUInt8, dbInt8, dbDateTimeSecs, dbVarChar.
<>
Not equal to. Supported operand types: dbBoolean, dbUInt32, dbInt32, dbUInt16, dbInt16, dbUInt8, dbInt8, dbDateTimeSecs, dbVarChar.
<
Less than. Supported operand types: dbUInt32, dbInt32, dbUInt16, dbInt16, dbUInt8, dbInt8, dbDateTimeSecs, dbVarChar.
<=
Less than or equal to. Supported operand types: dbUInt32, dbInt32, dbUInt16, dbInt16, dbUInt8, dbInt8, dbDateTimeSecs, dbVarChar.
>
Greater than. Supported operand types: dbUInt32, dbInt32, dbUInt16, dbInt16, dbUInt8, dbInt8, dbDateTimeSecs, dbVarChar.
>=
Greater than or equal to. Supported operand types: dbUInt32, dbInt32, dbUInt16, dbInt16, dbUInt8, dbInt8, dbDateTimeSecs, dbVarChar.
PS_LIKE
PalmSource Like. Supported operand types: dbVarChar.
AND
And. Operands: other WHERE clauses.
OR
Or. Operands: other WHERE clauses.
IS NULL
Is NULL. Supported operand types: all.
IS NOT NULL
Is not NULL. Supported operand types: all.

Moving Through the Rows in a Cursor

When you create a cursor, the Data Manager takes a snapshot of the cursor's row IDs. This snapshot is used for iterating rows and is not affected by sorting updates. This is important to note, since operations that affect the number and order of rows in a database table won't affect the cursor contents until you explicitly refresh the cursor with DbCursorRequery().

Cursors have a concept of a current row. When you open a cursor the current row is initially positioned at the first row. DbCursorMove() alters that current position: it can be used in a variety of ways. For convenience, the Data Manager includes a set of macros that simplify the process of altering the current row position:

DbCursorMoveFirst()
Moves the current row position to the first row in the cursor.
DbCursorMoveLast()
Moves the current row position to the last row in the cursor.
DbCursorMoveNext()
Moves the current row position one row forward.
DbCursorMovePrev()
Moves the current row position one row backward.
DbCursorMoveToRowID()
Move the current row position to the row with the specified ID.
DbCursorSetAbsolutePosition()
Moves the current row position to the row with the specified index.

IMPORTANT: The first row in a cursor has an index value (position) of 1, similar to ODBC and JDBC. This differs from other aspects of schema database programming: the first column in a table has an index value of zero, and the first table in a database also has an index value of zero.

These macros, plus the fact that an error code is returned if you attempt to move beyond the bounds of the cursor, make it simple to iterate through a cursor's rows. See Listing 2.7 for an example of how to do this.

Listing 2.7  Iterating through a cursor's rows


status_t err; 
 
err = DbCursorMoveFirst(myCursor); 
if(err == errNone){ 
   while(!DbCursorIsEOF(myCursor)){ 
      // do something with the row data here, using the 
      // cursor to indicate the current row. Like this: 
      DbCopyColumnValue(dbRef, myCursor, ...); 
 
      DbCursorMoveNext(myCursor); 
   } 
} 

Because the various Data Manager functions that accept a row ID also accept a cursor ID, you needn't obtain the row ID of the current cursor row. As shown in the above example, just supply the cursor ID when calling a function such as DbCopyColumnValue().

Rows that have been modified are not moved to their new sort position until DbCursorRequery() is called. Similarly, any newly-added rows are not available to the cursor until DbCursorRequery() is called. By calling DbCursorRequery(), you can refresh the cursor at any time to reflect the latest changes and sorting. Note that when a refresh occurs the current row may move to a new position and future move operations will move from the new position, not the old position. For example, if you change the data in the current row such that the row would wind up at the end of the cursor, and you then call DbCursorRequery(), a subsequent call to DbCursorMoveNext() will result in a dmErrCursorEOF error.

Data Variable Binding

Cursors allow you to bind variables to columns of the schema. When a variable is bound to a column, that variable is automatically updated with the field value of the current row in the cursor whenever the cursor's current position is changed. You needn't call DbGetColumnValues(); the data is automatically copied to the bound variables for you.

When calling DbCursorBindData() (or DbCursorBindDataWithOffset()), you must specify the ID of the column to which the variable is to be bound, a pointer to a data buffer (the bound variable), the length of that buffer, a pointer to a separate variable to hold the size of the data returned in the data buffer if the column type is one that has varying length, and a pointer to a variable that will receive an error code that is set each time the variable is updated. The error code will be set to errNone if the data is copied to the bound variable successfully, to dmErrNoColumnData if the column contains no data, or to some other value if an error of a different sort occurred.

The DbCursorBindDataWithOffset() function is similar to DbCursorBindData() but adds an extra parameter that lets you specify a byte offset into the field's data. The data copied to the variable is taken from the database field at the specified offset. This allows you to bind a subset of the field data to a variable.

You need to call DbCursorBindData() (or DbCursorBindDataWithOffset()) once for each column whose data you want automatically retrieved or set. It is not necessary to bind every column in the schema; bind only those that you are interested in. See Listing 2.8 for an example of how to use data variable binding.

Listing 2.8  Data variable binding example


uint32_t cursor; 
char name[32]; 
char phone[24]; 
uint32_t sizeName; 
uint32_t sizePhone; 
status_t errName; 
status_t errPhone; 
 
dbRef = DbOpenDatabase(dbID, dmModeReadWrite, dbShareNone,
   idSortByName); 
 
// Create the cursor 
err = DbCursorOpen(dbRef, selectString, 0, &cursor); 
 
// Bind the local variables to columns 
DbCursorBindData(cursor, idColName, name, 32, &sizeName, 
&errName); 
DbCursorBindData(cursor, idColPhone, phone, 24, &sizePhone,
   &errPhone); 
 
// Read and display all rows in the cursor 
err = DbCursorMoveFirst(cursor); 
while (err == errNone){ 
   // Data is now in bound variables, so display it 
   DisplayNameAndPhone(name, sizeName, phone, sizePhone);	 
 
   // Get data for next row 
   err = DbCursorMoveNext(cursor); 
} 
 
// Change the field values in the 5th row in the cursor 
// (cursor positions are 1-based) 
DbCursorSetAbsolutePosition(cursor, 5); 
strcpy(name, "John Doe"); 
sizeName = strlen(name) + 1;   // include the terminating null 
strcpy(phone, "555-1234"); 
sizePhone = strlen(phone) + 1; 
err = DbCursorUpdate(cursor); 
 
DbCursorClose(cursor); 

Variable binding can also be used to write data to the database. Simply set each bound variable to its desired value, then call DbCursorUpdate(). All values are written to the database for the current row. Note that for varying-length types (dbVarChar and dbBlob) you should also set the corresponding dataSize variable—specified when you bound the variable to the schema column—to indicate the size of the data to be written back to that field.


NOTE: You must call DbCursorUpdate() each time you wish to update a schema database row with the contents of its bound variables. Changing the cursor's current position transfers data from the row to the bound variables; it doesn't automatically transfer data from the bound variables to the row's fields.

Secure Databases ^TOP^

Some applications need to restrict access to their data. The Data Manager supports the creation of secure databases that are protected by application-defined access rules, also known as rule sets.

To create a secure database, use DbCreateSecureDatabase(). When a secure database is initially created, it is completely protected and cannot be opened until access rules allowing read or write access have been defined for the database. DbCreateSecureDatabase() returns an initial rule set for the newly-created secure database. The initial rule set contains only one rule that allows the calling application, and no other, to modify the database's access rules. This is known as modify access.

There are six different actions that can be used in access rules:

dbActionRead
dbActionWrite
dbActionDelete
dbActionSchemaEdit
dbActionBackup
dbActionRestore

Access rules can require a digital signature, require a password, require a PIN, or allow unrestricted access. You can define different access rules for each of the different actions defined by the Data Manager. For example, a secure database could be configured to allow read access to anyone, but require a password for all other access. Creating access rules that require digital signatures provides for databases that can only be accessed by applications that have the correct digital signature. For more information about access rules, see Exploring Palm OS: Security and Cryptography.

Security is maintained at the database level, not for each individual row. All rows in the database have the same level of security. There is no way to assign different levels of security for different rows in the same database.

Secure databases are only visible to the Data Manager process. They are stored in the Data Manager's private secure storage heap, separate from unsecured databases. Applications can use the Data Manager catalog functions, such as DmFindDatabase(), to determine if the secure database exists. But the database data is not available to an application until the application, the user, or both have been authorized.

When an application requests access to a secure database, the Data Manager first calls the Authorization Manager to verify that the current user and/or application has the right to access the database. If the Authorization Manager approves access, the Data Manager copies the requested rows to the application process as needed. For read operations the database data is copied from the Data Manager's private secure storage heap to the application's dynamic heap. Note that because the data is copied to the application's dynamic heap, the data is writable. Even so, the application must still call the appropriate Data Manager write functions to update the data; writing directly to the copy in the dynamic heap has no effect on the row data in the database.

The Data Manager requires dbActionRead authorization when using the following functions on a secure database:

  • DbOpenDatabase() with read-only mode
  • DbOpenDatabaseByTypeCreator() with read-only mode

The Data Manager requires dbActionWrite authorization when using the following functions on a secure database:

  • DbOpenDatabase() with write-only mode or read-write mode
  • DbOpenDatabaseByTypeCreator() with write-only mode or read-write mode
  • DmSetDatabaseInfo()
  • DmSetDatabaseProtection()

The Data Manager requires dbActionDelete authorization when using the following functions on a secure database:

  • DmDeleteDatabase()

The Data Manager requires dbActionSchemaEdit authorization when using the following functions on a secure database:

  • DbAddSchema()
  • DbAddColumn()
  • DbRemoveSchema()
  • DbRemoveColumn()
  • DbSetColumnPropertyValue()
  • DbSetColumnPropertyValues()
  • DbRemoveColumnProperty()

The Data Manager requires dbActionBackup authorization when using the following functions on a secure database:

  • DmBackupInitialize()
  • DmBackupUpdate()
  • DmBackupFinalize()

The Data Manager requires dbActionRestore authorization when using the following functions on a secure database:

  • DmRestoreInitialize()
  • DmRestoreUpdate()
  • DmRestoreFinalize()

No other Data Manager functions require authorization when used on a secure database: they either require a prior open call before they can be used, or they do not perform an operation that necessitates authorization.

Once a secure database has been successfully authorized and opened, the Data Manager places a special key in the calling application's process that indicates that the process has been authorized to use the database. All Data Manager functions that take a DmOpenRef as a parameter use this special key as proof that the application is allowed access. This allows the Data Manager to detect forged DmOpenRefs without needing to call the Authorization Manager for every function. The key is revoked when the database is closed.

The Data Manager also provides a function, DbGetRuleSet(), that allows an application to get the current rule set for a secure database. Once the rule set is obtained, the application can modify the access rules for the secure database, provided that the application has modify access. Note that while a secure database is open, any changes to its access rules do not take effect until the database is reopened.

Note that the Data Manager does not provide functions for creating or modifying the access rules, only functions for creating secure databases. Your application must use functions provided by the Authorization Manager and the Authentication Manager to define the access rules for a secure database.

Secure Databases and HotSync Operations

The Data Manager restricts access to a secure database to only those applications and users authorized by the database's access rules. During a sync operation the HotSync® client on the handheld uses Data Manager functions to access the handheld databases on behalf of the conduits running on the desktop. The HotSync client application must be able to access secure databases that need to be synchronized or backed up.

In order for an application to ensure that its secure database is syncable, it must modify the database's access rules to give the HotSync client special "bypass" access using the AzmLibSetBypass() function. When the HotSync client is given bypass access, any conduit on the desktop is able to access the database (the HotSync process does not provide a way to restrict access on a per-conduit basis). Bypass access is granted separately for each action, so you can, for example, give the HotSync client read access but not write or delete access.

If the HotSync client is not given bypass access, it is subject to the normal access rules as defined by the application. For example, if an application defines the access rules for its database so that only signed applications have access (read, write, or delete), during a HotSync operation the database isn't syncable since the HotSync client doesn't have the proper signature required to access the data. Therefore to allow syncing of the database the application must give "bypass" access to the HotSync client, which essentially grants access both to the HotSync client and to any properly-signed application.

The HotSync client on the handheld maintains a notion of trusted desktops. The HotSync process doesn't allow syncing or backing up of secure databases to non-trusted desktops.

Backing up Secure Databases

When a secure database is backed up, the Data Manager encrypts the data; it is sent to the desktop in encrypted form and saved there still encrypted. This differs from a sync operation: data sent to the desktop during synchronization is sent "in the clear"—it is not encrypted.

Secure databases that were encrypted during backup can only be decrypted and restored by the Data Manager. The Data Manager provides special functions to perform the backup and restore operations: you use a combination of DmBackupInitialize(), DmBackupUpdate(), and DmBackupFinalize() to back the database up, and DmRestoreInitialize(), DmRestoreUpdate(), and DmRestoreFinalize() to restore the data. Note that these backup and restore functions work with both secure and non-secure databases.

Concurrent Database Access ^TOP^

When you open a non-schema database with write access, you have exclusive access to that database: no one else can open it while you have it open, even with only read access. Similarly, when you open a non-schema database with read access, no one else can open that same database with write access. This can be somewhat restrictive: on a communicator-style device, for example, if you are editing a record in the address book when the phone rings, the phone application running in another process can't open the address book to perform a caller-ID lookup.

Schema databases don't have this problem because they support concurrent access to a single database: one writer and multiple readers are allowed. Note, however, that schema databases don't support multiple concurrent writers.

When opening a schema database you specify a share mode in addition to an access mode. The following share mode constants are supported for schema databases. Only one share mode can be specified when opening a database.

  • dbShareNone: No one else can open this database.
  • dbShareRead: Others can open this database with read access.
  • dbShareReadWrite: Others can open this database with read or write access.

Concurrent write access to the same database is not supported. That is, specifying an access mode of dmModeReadWrite and a share mode of dbShareReadWrite is not supported; an error will be returned if you attempt to open a database with this combination of access and share modes.

Table 2.8, below, shows all of the allowed combinations of access modes and share modes, and identifies which combinations can be used together (those that are marked "OK").

Table 2.8  Allowable concurrent access/share mode combinations

First open (rows) versus second open (columns); "not allowed" means sharing is not allowed:

                        Mode=R       Mode=R       Mode=R       Mode=R/W     Mode=R/W
                        Share=None   Share=R      Share=R/W    Share=None   Share=R

Mode=R,   Share=None    not allowed  not allowed  not allowed  not allowed  not allowed
Mode=R,   Share=R       not allowed  OK           OK           not allowed  not allowed
Mode=R,   Share=R/W     not allowed  OK           OK           not allowed  OK
Mode=R/W, Share=None    not allowed  not allowed  not allowed  not allowed  not allowed
Mode=R/W, Share=R       not allowed  not allowed  OK           not allowed  not allowed

When sharing is enabled (that is, when the database is opened with shared read or shared read/write), the Data Manager server synchronizes access to the database. The synchronization is done at the database level. Each schema database function call is atomic, thus providing data integrity at the function level. Since the Data Manager doesn't support multiple applications writing to the same database, it doesn't have to deal with issues around concurrent updates.

As discussed in "Reading Data," you can access record values by copy or by reference. When using the "by reference" functions to read record values from a database opened with shared write access, the Data Manager maintains a reference count of the number of active readers for each row. Applications can only modify a row if its reference count is 0—that is, if no one is currently reading that row. This protects the row against concurrent updates.

Whenever a schema database row is modified, added, deleted, or removed, the row index and any sort indices are automatically updated. This can only be done when the database is opened with write access. If another process has concurrently opened the same database with read access, it too is affected by changes to the sort indices. This is not a problem, however, since cursors shield the application from such changes.

Working with Non-Schema Databases ^TOP^

Schema databases impose a structure upon the data, organizing it into tables, rows, and columns. Non-schema databases, on the other hand, impose less overhead and are significantly more flexible. Of course, your application generally has to do more work when dealing with non-schema databases, since your application is entirely responsible for interpreting the structure of each record.

Non-schema databases can be either record or resource databases. A record database holds application data; each record can be structured in any fashion the application desires. Resource databases are used to contain executable code, application resources, and the like.

In Palm OS Cobalt, non-schema databases come in two "flavors": classic and extended. Classic databases are provided for compatibility with previous versions of Palm OS (and with applications running on Palm OS Cobalt through PACE). Because of a couple of long-standing limitations, however, unless your application needs this level of compatibility it should use extended or schema databases instead. Both classic and extended databases can be either record or resource databases.

Extended databases are very similar to classic databases. They differ in the following ways:

Classic Database                                 Extended Database

Records cannot exceed 64 KB in size.             Records can be more than 64 KB in length.

Are uniquely identified by name.                 Are uniquely identified by a combination of
                                                 name and creator ID.

Data should be stored in big-endian format       Data can be stored in either big-endian or
(for 68K compatibility).                         little-endian format.

Because the two non-schema database types are so similar, you use many of the same functions when working with either database type. One of the most important functions that works only on extended databases is DmCreateDatabase(). To create a classic database, you use DmCreateDatabaseV50() instead. Other functions behave differently depending on whether you are operating on a classic or an extended database, and still others—such as DmFindDatabase()—use parameters to control their behavior in this area.

Structure of a Non-Schema Database Header ^TOP^

A non-schema database header consists of some basic database information and a list of records in the database. Each record entry in the header has the MemHandle of the record, 8 attribute bits, and a 3-byte unique ID for the record.

This section provides information about database headers, discussing these topics:

  • Database Header Fields
  • Structure of a Record Entry in a Non-Schema Database Header

IMPORTANT: Expect the database header structure to change in the future. Use the API to work with database structures.

Database Header Fields

The database header has the following fields:

  • The name field holds the name of the database.
  • The attributes field has flags for the database.
  • The version field holds an application-specific version number for that database.
  • The modificationNumber is incremented every time a record in the database is deleted, added, or modified. Thus applications can quickly determine if a shared database has been modified by another process.
  • The appInfoID is an optional field that an application can use to store application-specific information about the database. For example, it might be used to store user display preferences for a particular database.
  • The sortInfoID is another optional field an application can use for storing the ID of a sort table for the database.
  • The type and creator fields are each 4 bytes and hold the database type and creator. The system uses these fields to distinguish application databases from data databases and to associate data databases with the appropriate application.
  • The numRecords field holds the number of record entries stored in the database header itself. If all the record entries cannot fit in the header, then nextRecordList identifies a recordList that contains the next set of records.

    Each record entry stored in a record list has three fields and is 8 bytes in length: the MemHandle of the record, which takes up 4 bytes; 1 byte of attributes; and a 3-byte unique ID for the record. The attribute field, shown in Figure 2.1, is 8 bits long and contains 4 flags and a 4-bit category number. The category number is used to place records into user-defined categories like "business" or "personal."

Figure 2.1  Record Attributes

Structure of a Record Entry in a Non-Schema Database Header

Each record entry has the MemHandle of the record, 8 attribute bits, and a 3-byte unique ID for the record.

The unique ID must be unique for each record within a database. It remains the same for a particular record no matter how many times the record is modified. It is used during synchronization with the desktop to track records on the Palm Powered handheld with the same records on the desktop system.

The record attribute bits are set in the following circumstances:

  • When the user deletes or archives a record the delete bit is set. Note, however, that its entry in the database header remains until the next synchronization with the PC.
  • The dirty bit is set whenever a record is updated.
  • The busy bit is set when an application currently has a record locked for reading or writing.
  • The secret bit is set for records that should not be displayed before the user password has been entered on the handheld.

When a user "deletes" a record on a Palm Powered handheld, the record's data chunk is freed, the MemHandle stored in the record entry is set to 0, and the delete bit is set in the attributes. When the user archives a record, the deleted bit is also set but the chunk is not freed and the MemHandle is preserved. This way, the next time the user synchronizes with the desktop system, the desktop computer can quickly determine which records to delete (since their record entries are still around on the handheld). In the case of archived records, the conduit can save the record data on the desktop before it permanently removes the record entry and data from the handheld. For deleted records, the conduit just has to delete the same record from the desktop before permanently removing the record entry from the handheld.

Working with Records ^TOP^

Using the Data Manager is similar to using a traditional file manager, except that the data is broken down into multiple records instead of being stored in one contiguous chunk. To create or delete a database, call DmCreateDatabase() (or, for classic databases, DmCreateDatabaseV50()) and DmDeleteDatabase().

To open a database for reading or writing, you must first get the database ID. Calling DmFindDatabase() searches for a database by name and type (schema, extended, or classic) and returns its database ID.

After determining the database ID, you can open the database for read-only or read/write access. When you open a database, the system locks down the database header and returns a reference to a database access structure, which tracks information about the open database and caches certain information for optimum performance. The database access structure is a relatively small structure (less than 100 bytes) allocated in the dynamic heap that is disposed of when the database is closed.

Call DmDatabaseInfo(), DmSetDatabaseInfo(), and DmDatabaseSize() to query or set information about a database, such as its name, size, creation and modification dates, attributes, type, and creator.

Call DmGetRecord(), DmQueryRecord(), and DmReleaseRecord() when viewing or updating a database.

  • DmGetRecord() takes a record index as a parameter, marks the record busy, and returns a handle to the record. If a record is already busy when DmGetRecord() is called, an error is returned.
  • DmQueryRecord() is faster if the application only needs to view the record; it doesn't check or set the busy bit, so it's not necessary to call DmReleaseRecord() when finished viewing the record.
  • DmReleaseRecord() clears the busy bit, and updates the modification number of the database and marks the record dirty if the dirty parameter is true.

To resize a record to grow or shrink its contents, call DmResizeRecord(). During reallocation, the handle to the record may change. DmResizeRecord() returns the new handle to the record.

To add a new record to a database, call DmNewRecord(). This function can insert the new record at any index position, append it to the end, or replace an existing record by index. It returns a handle to the new record.

There are three methods for removing a record: DmRemoveRecord(), DmDeleteRecord(), and DmArchiveRecord().

  • DmRemoveRecord() removes the record's entry from the database header and disposes of the record data.
  • DmDeleteRecord() also disposes of the record data, but instead of removing the record's entry from the database header, it sets the deleted bit in the record entry attributes field and clears the local chunk ID.
  • DmArchiveRecord() does not dispose of the record's data; it just sets the deleted bit in the record entry.

Both DmDeleteRecord() and DmArchiveRecord() are useful for synchronizing information with a desktop computer. Since the unique ID of the deleted or archived record is still kept in the database header, the desktop computer can perform the necessary operations on its own copy of the database before permanently removing the record from the Palm OS database.

Call DmGetRecordAttr(), DmGetRecordCategory(), and DmGetRecordID() to retrieve the record information stored in the database header, and DmSetRecordAttr(), DmSetRecordCategory(), and DmSetRecordID() to set this information. Typically, applications set or retrieve the category of a record, which is stored in the lower four bits of the record's attribute field.

To move records from one index to another or from one database to another, call DmMoveRecord(), DmAttachRecord(), and DmDetachRecord(). DmDetachRecord() removes a record entry from the database header and returns the record handle. Given the handle of a new record, DmAttachRecord() inserts or appends that new record to a database or replaces an existing record with the new record. DmMoveRecord() is an optimized way to move a record from one index to another in the same database.

Record Attributes ^TOP^

Table 2.9 lists the functions that you use to get and set a non-schema database record's ID, category, and attributes.

Table 2.9  Functions used to access record information 

Resource Databases ^TOP^

Structure of a Resource Database Header

A resource database header consists of some general database information followed by a list of resources in the database. The first portion of the header is identical in structure to a normal database header (see "Structure of a Non-Schema Database Header"). Resource database headers are distinguished from normal database headers by the dmHdrAttrResDB bit in the attributes field.


IMPORTANT: Expect the resource database header structure to change in the future. Use the API to work with resource database structures.
  • The name field holds the name of the resource database.
  • The attributes field has flags for the database and always has the dmHdrAttrResDB bit set.
  • The modificationNumber is incremented every time a resource in the database is deleted, added, or modified. Thus, applications can quickly determine if a shared resource database has been modified by another process.
  • The appInfoID and sortInfoID fields are not normally needed for a resource database but are included to match the structure of a regular database. An application may optionally use these fields for its own purposes.
  • The type and creator fields hold 4-byte signatures of the database type and creator as defined by the application that created the database.
  • The numResources field holds the number of resource info entries that are stored in the header itself. In most cases, this is the total number of resources. If all the resource info entries cannot fit in the header, however, then nextResourceList has the chunkID of a resourceList that contains the next set of resource info entries.

Each 10-byte resource info entry in the header has the resource type, the resource ID, and the ID of the Memory Manager chunk that contains the resource data.

Working with Resource Databases

You can create, delete, open, and close resource databases with the functions used to create normal record-based databases (see "Working with Non-Schema Databases"). This includes all database-level (not record-level) functions in the Data Manager such as DmCreateDatabase(), DmDeleteDatabase(), DmDatabaseInfo(), and so on.

When you create a new database using DmCreateDatabase(), the type of database created (record or resource) depends on the value of the resDB parameter. If set, a resource database is created and the dmHdrAttrResDB bit is set in the attributes field of the database header. Given a database header ID, an application can determine which type of database it is by calling DmDatabaseInfo() and examining the dmHdrAttrResDB bit in the returned attributes field.

Once a resource database has been opened, an application can read and manipulate its resources by using the resource-based access functions of the Resource Manager. Generally, applications use the DmGetResource() and DmReleaseResource() functions.

DmGetResource() searches a specified resource database and returns a handle to a resource, given the resource type and ID.


NOTE: Previous versions of Palm OS had the notion of a resource "search chain", the set of all open resource databases that were searched when looking for a specified resource. This concept isn't really supported in Palm OS 6, except for compatibility purposes: applications that run under PACE will work as originally designed. To support this level of compatibility, the Data Manager contains a number of deprecated functions that provide the old functionality. These functions are: DmOpenDatabaseV50(), DmOpenDBNoOverlayV50(), DmOpenDatabaseByTypeCreatorV50(), DmGetResourceV50(), and DmGet1ResourceV50(). Because these functions are deprecated, applications written for Palm OS 6 should not rely upon them.

DmReleaseResource() should be called as soon as an application finishes reading or writing the resource data. To resize a resource, call DmResizeResource(), which accepts a handle to a resource and reallocates the resource. It returns the handle of the resource, which might have been changed.

The remaining Resource Manager functions are usually not required for most applications. These include functions to get and set resource attributes, move resources from one database to another, get resources by index, and create new resources. Most of these functions reference resources by index to optimize performance. When referencing a resource by index, the DmOpenRef of the open resource database that the resource belongs to must also be specified. Call DmSearchResourceOpenDatabases() to find a resource by type and ID or by pointer by searching in all open resource databases opened by the process. Note that this function does not search resource databases opened in other processes.

To get the DmOpenRef of the topmost open resource database, call DmNextOpenResDatabase() and pass NULL as the current DmOpenRef. To find out the DmOpenRef of each successive database, call DmNextOpenResDatabase() repeatedly with each successive DmOpenRef.

Given the access pointer of a specific open resource database, DmFindResource() can be used to return the index of a resource, given its type and ID. DmFindResourceType() can be used to get the index of every resource of a given type. To get a resource handle by index, call DmGetResourceByIndex().

To determine how many resources are in a given database, call DmNumResources(). To get and set attributes of a resource including its type and ID, call DmResourceInfo() and DmSetResourceInfo(). To attach an existing data chunk to a resource database as a new resource, call DmAttachResource(). To detach a resource from a database, call DmDetachResource().

To create a new resource, call DmNewResource() and pass the desired size, type, and ID of the new resource. To delete a resource, call DmRemoveResource(). Removing a resource disposes of its data chunk and removes its entry from the database header.

Overlays

Resource databases (and only resource databases) can have overlay databases associated with them; these localization overlays provide a method of localizing a software module without requiring a recompile or modification of the software. Each overlay database is a separate resource database that provides an appropriately-localized set of resources for a single software module (the base database) and a single target locale (language and country).

When a resource database is opened, the Data Manager looks for an overlay matching the base database and the current locale. When searching for an overlay database, the Data Manager first looks in RAM. If an appropriate overlay database isn't found there for the specified base database and target locale, it then tries to locate one in ROM.

Most of the locale APIs are declared in the Locale Manager, which is documented in Exploring Palm OS: Text and Localization. The Data Manager does provide a few functions, however, that let you get and set the locale used when opening an overlay, determine an overlay database's locale, and identify the proper overlay database given the name of a base database and a locale.


NOTE: There is no system support for letting the user pick the language of a given application. A separate application—the "language picker"—lets the user change the Data Manager's overlay locale. This application sets the overlay locale indirectly, by changing the system locale and thus forcing a soft reset.

The Data Manager's overlay locale is a global setting that applies to all processes and threads. The overlay locale is initialized to be the same as the system locale following a soft reset. After the overlay locale is changed by calling DmSetOverlayLocale(), whenever the Data Manager needs to automatically open an overlay it uses the specified locale. If no valid overlay exists for that overlay locale, the Data Manager uses the fallback overlay locale instead.

You set the Data Manager's overlay locale with DmSetOverlayLocale(), and you get it with DmGetOverlayLocale(). Similarly, set the fallback overlay locale by calling DmSetFallbackOverlayLocale() and get it by calling DmGetFallbackOverlayLocale().

For a given overlay database, you can determine its locale by passing the overlay database name and a pointer to an LmLocaleType structure to DmGetOverlayDatabaseLocale(). Upon return, the LmLocaleType structure contains the overlay database's locale.

To locate the overlay database for a given base database, pass the name of the base database and an LmLocaleType structure indicating the desired locale to DmGetOverlayDatabaseName(). It will return the name of the overlay database for the specified base database and locale. You can pass NULL instead of a pointer to an LmLocaleType structure to obtain the overlay database name for the base database and the current locale.

Overlay Signature Verification

If the base database is signed, then the overlay database must also be signed, and its signature must be validated using a certificate ID that comes from the base database's 'sign' resource. More specifically:

  • The base database's 'sign' resource must contain one or more overlay certificate ID values.
  • The overlay database must contain a 'sign' resource.
  • One of the signatures in the overlay database's 'sign' resource must use a certificate ID that comes from the base database's 'sign' resource list of overlay certificate ID values, and this signature must validate the overlay database.

Data Manager Tips ^TOP^

Working properly with databases makes your application run faster and synchronize without problems. Follow these suggestions:

  • Database names can be up to 31 characters in length and, on the handheld, can be composed only of valid 7-bit ASCII characters. Some conduits—such as PalmSource's backup conduit—use a name-mangling scheme to preserve case-sensitive database names when generating backup filenames on Microsoft Windows. Other conduits may not do this, however, so you may want to avoid names that depend on case for distinction.

IMPORTANT: Previous versions of Palm OS didn't enforce the requirement that database names be composed only of 7-bit ASCII characters. Palm OS Cobalt requires that this be so.

By convention, filename extensions are not used on the handheld. Instead, a database's type identifies it as a member of a certain class of databases. Note that when the PalmSource backup conduit transfers a file to the desktop, it automatically appends one of the following extensions to the database filename:

  • PRC for resource databases (classic or extended)
  • PDB for non-schema record databases (classic or extended)
  • SDB for non-secure schema databases
  • SSD for secure schema databases
  • VLT for vault databases used to hold security information (HEKs, rules, tokens, and the like)

The extension is removed when the file is transferred back to the handheld.

  • When the user deletes a record from a database, call DmDeleteRecord() (or DbDeleteRow()) to remove the record's data, rather than DmRemoveRecord() (or DbRemoveRow()), which removes the record entry itself. That way, the desktop application can learn that the record was deleted the next time there is a HotSync operation.

    Note: If your application doesn't have an associated conduit, call DmRemoveRecord() to completely remove the record.

  • Keep data in database records compact. For performance reasons, Palm OS databases are not compressed, but all data should be tightly packed. This pays off both for storage and during HotSync operations.
  • All records in a non-schema database should be of the same type and format. This is not a requirement, but is highly recommended to avoid processing overhead.
  • Be sure your application modifies the flags in the database header appropriately when the user deletes or otherwise modifies information. This flag modification is only required if you're synchronizing with the PalmSource PIM applications, but should likely be done with any database that is to be sync'd by a conduit.
  • Don't display deleted records.
  • Call DmSetDatabaseInfo() when creating a non-schema database to assign a version number to the database. Databases default to version 0 if the version isn't explicitly set.
  • Call DmDatabaseInfo() to check the non-schema database version at application start-up.

File Streaming Layer ^TOP^

The file streaming functions add a layer on top of the classic database functions and let you work with a Palm OS database using a more familiar set of operations. File streams allow you to read, write, seek to a specified offset, truncate, and do everything else you'd expect to do with a desktop-style file.

Other than backup and restore, Palm OS does not provide direct HotSync support for file streams.

The use of double-buffering imposes a performance penalty on file streams that may make them unsuitable for certain applications. Record-intensive applications tend to obtain better performance from the Data Manager.

Using the File Streaming API ^TOP^

The File Streaming API is derived from the C programming language's <stdio.h> interface. Any C book that explains the <stdio.h> interface should serve as a suitable introduction to the concepts underlying the Palm OS File Streaming API. This section provides only a brief overview of the most commonly used file streaming functions.

The FileOpen() function opens or creates a file (an extended database; use FileOpenV50() to open or create a classic database), and the FileRead() function reads it. The semantics of FileRead() and FileWrite() are just like their <stdio.h> equivalents, the fread() and fwrite() functions. The other <stdio.h> functions have obvious analogs in the File Streaming API as well.

For example,


theStream = FileOpen("KillerAppDataFile", 'KILR',
   'KILD', fileModeReadOnly, &err);  

As on a desktop system, the file name is what uniquely identifies the file. The creator ID and file type are for informational purposes, though your code may require that an opened file have the correct type and creator.


IMPORTANT: Previous versions of Palm OS didn't enforce the requirement that database names passed to FileOpen() be composed only of 7-bit ASCII characters. Palm OS Cobalt requires that this be so.

Normally, the FileOpen() function returns an error when it attempts to open or replace an existing stream having a type and creator that do not match those specified. To suppress this error, pass the fileModeAnyTypeCreator selector as a flag in the openMode parameter to the FileOpen() function.

To read data, use the FileRead() function as in the following example:


FileRead(theStream, &buf, objSize, numObjs, &err);  

To free the memory used to store stream data as the data is read, you can use the FileControl() function to switch the stream to destructive read mode. This mode is useful for manipulating temporary data; for example, destructive read mode would be ideal for adding the objects in a large data stream to a database when sufficient memory for duplicating the entire file stream is not available. You can switch a stream to destructive read mode by passing the fileOpDestructiveReadMode selector as the value of the op parameter to the FileControl() function.

The FileDmRead() function can read data directly into a Data Manager chunk for immediate addition to a Palm OS database.