Friday, July 13, 2007

SQL Server DO's and DON'Ts

So, you are now the leader of a SQL Server-based project and this is your first one, perhaps migrating from Access. Or maybe you have performance problems with your SQL Server and don't know what to do next. Or maybe you simply want some design guidelines for solutions using SQL Server and for designing Database Access Layers (DALs): this article is for you.

Even if you are not using SQL Server, most of these design guidelines apply to other DBMSs, too: Sybase is a very similar environment for the programmer, and Oracle designs may benefit as well. I won't show you specific T-SQL tricks here, nor will I give you miracle solutions for your SQL Server problems. This is by no means a complete, closed issue. What I intend to do is give you some advice for a sound design, based on lessons learned over the last years, watching the same design errors being made again and again.

DO know your tools.

Please don't underestimate this tip. It is the best of all those you'll see in this article. You'd be surprised how many SQL Server programmers don't even know all the T-SQL commands and all the effective tools SQL Server has.

"What? I need to spend a month learning all those SQL commands I'll never use???" you might say. No, you don't need to. But spend a weekend at MSDN and browse through all T-SQL commands: the mission here is to learn a lot of what can and what can't be done. And, in the future, when designing a query, you'll remember "Hey, there's this command that does exactly what I need", and then you'll refer again to MSDN to see its exact syntax.

In this article, I'll assume that you already know T-SQL syntax or can find out about it on MSDN.

DON'T use cursors

Let me say it again: DON'T use cursors. They are the surest way of killing the performance of an entire system. Most beginners use cursors without realizing the performance hit they take. Cursors use memory, they lock tables in strange ways, and they are slow. Worst of all, they defeat most of the performance optimizations your DBA can make. Did you know that every FETCH executed has about the same cost as executing a SELECT? This means that if your cursor has 10,000 records, it will execute about 10,000 SELECTs! If you can do the same work with a couple of SELECT, UPDATE or DELETE statements, it will be much faster.

Beginner SQL programmers find cursors a comfortable and familiar way of coding. Unfortunately, this leads to bad performance. The whole purpose of SQL is specifying what you want, not how it should be done.

I once rewrote a cursor-based stored procedure, replacing the cursor logic with a pair of traditional, set-based SQL queries. The table had only 100,000 records, and the stored procedure used to take 40 minutes to process it. You should have seen the face of the poor programmer when the new stored procedure took 10 seconds to run!
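
To make the contrast concrete, here is a minimal sketch of both approaches; the Orders table and its columns are hypothetical:

    -- Cursor version: one FETCH (roughly one SELECT) per row
    DECLARE @id int
    DECLARE order_cur CURSOR FOR
        SELECT OrderID FROM Orders WHERE Amount > 1000
    OPEN order_cur
    FETCH NEXT FROM order_cur INTO @id
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE Orders SET Discount = 0.10 WHERE OrderID = @id
        FETCH NEXT FROM order_cur INTO @id
    END
    CLOSE order_cur
    DEALLOCATE order_cur

    -- Set-based version: the same work in a single statement
    UPDATE Orders SET Discount = 0.10 WHERE Amount > 1000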

Sometimes it's even faster to create a small application that fetches all the data, processes it, and updates the server. T-SQL was not designed with loop performance in mind.

Since you are reading this article, I need to mention it: there is no good use for cursors; I have never seen cursors used well, except for DBA work. And good DBAs, most of the time, know what they are doing. But if you are reading this, you are not a DBA, right?

DO normalize your tables

There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later; and, about performance, don't optimize what's not slow. I often see programmers de-normalizing databases because "this will be slow", and, more often than not, the resulting design is slower. DBMSs were designed to work with normalized databases, so design with normalization in mind.

DON'T SELECT *

This is a hard habit to break, I know. And I confess: I often use it. But try to specify only the columns you'll need (see the sketch after this list). This will:

  1. Reduce memory consumption and network bandwidth
  2. Ease security design
  3. Give the query optimizer a chance to read all the needed columns from the indexes
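
Here is the sketch of the third point: assuming a hypothetical Customers table with an index on (Country, Name), the second query can be answered from the index alone, while the first forces a read of the full rows:

    -- Reads every column of every matching row
    SELECT * FROM Customers WHERE Country = 'CA'

    -- Can be satisfied entirely from the (Country, Name) index
    SELECT Name FROM Customers WHERE Country = 'CA'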

DO know how your data will be/is being accessed

A robust index design is one of the best things you can do for your database, and doing it is almost an art form. Every time you add an index to a table, SELECTs get faster, but INSERTs and DELETEs get much slower, because there's a lot of work in building and maintaining indexes. If you add several indexes to a table to speed up your SELECTs, you'll soon notice locks being held for a long time while the indexes are updated. So, the question is: what is being done with this table, reading or updating data? This question is tricky, especially with DELETE and UPDATE, because they often involve a SELECT for the WHERE part before they update the table.

DON'T create an index on the "Sex" column

This is useless. First, let's understand how indexes speed up table access. You can see indexes as a way of quickly partitioning a table based on a criterion. If you create an index on a column like "Sex", you'll have only two partitions: Male and Female. What optimization will that buy you on a table with 1,000,000 rows? Remember, maintaining an index is slow. Always design your indexes with the most selective (sparse) columns first and the least selective last, e.g., Name + Province + Sex.
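
Following that rule, such an index might be declared like this (the table name is hypothetical):

    CREATE NONCLUSTERED INDEX IX_Customers_Name_Province_Sex
        ON Customers (Name, Province, Sex)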

DO use transactions

Especially on long-running queries. This will save you when things go wrong. After working with data for some time, you'll discover some unexpected situation that makes your stored procedure crash.
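
A minimal sketch of the pattern inside a stored procedure, using the SQL Server 2000-era @@ERROR check; the Accounts table is hypothetical:

    BEGIN TRANSACTION

    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION   -- undo everything done so far
        RETURN
    END

    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END

    COMMIT TRANSACTION   -- both updates succeed or neither does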

DO beware of deadlocks

Always access your tables in the same order. When working with stored procedures and transactions, you may run into this soon. If you lock table A and then table B, always lock them in that same order in all stored procedures. If, by accident, you lock table B and then table A in another procedure, some day you'll have a deadlock. Deadlocks can be tricky to find if the lock sequence is not carefully designed.
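
A sketch of the rule with two hypothetical procedures: both lock TableA before TableB, so neither can deadlock the other on ordering alone:

    CREATE PROCEDURE MoveStock AS
    BEGIN TRANSACTION
        UPDATE TableA SET Qty = Qty - 1 WHERE ID = 1   -- A first...
        UPDATE TableB SET Qty = Qty + 1 WHERE ID = 1   -- ...then B
    COMMIT TRANSACTION
    GO

    CREATE PROCEDURE RestockItem AS
    BEGIN TRANSACTION
        UPDATE TableA SET Qty = Qty + 5 WHERE ID = 2   -- same order: A first...
        UPDATE TableB SET Qty = Qty - 5 WHERE ID = 2   -- ...then B
    COMMIT TRANSACTION
    GO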

DON'T open large recordsets

A common request on programming forums is: "How can I quickly fill this combo with 100,000 items?". Well, this is an error. You can't, and you shouldn't. First, your user will hate browsing through 100,000 records to find the right one. A better UI is needed here; you should ideally show no more than 100 or 200 records to your users.

DON'T use server side cursors

Unless you know what you are doing. Client-side cursors often (not always) put less overhead on the network and on the server, and reduce locking time.

DO use parameterized queries

Sometimes I see questions in programming forums like: "My queries are failing with some chars, e.g. quotes. How can I avoid it?". And a common answer is: "Replace it with double quotes". Wrong. This is only a workaround: it will still fail with other chars, and it will introduce serious security bugs. Besides this, it will trash the SQL Server caching system, which will cache several similar queries instead of caching only one. Learn how to use parameterized queries (in ADO, through the Command object, or in ADO.NET, the SqlCommand) and never have these problems again.
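
The same idea can be sketched on the server side with sp_executesql; the table and parameter names are hypothetical:

    -- One cached plan serves every value of @name, and quotes need no escaping
    EXEC sp_executesql
        N'SELECT CustomerID FROM Customers WHERE Name = @name',
        N'@name varchar(50)',
        @name = 'O''Brien'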

DO always test with large databases

It's a common pattern: programmers develop against a small test database while the end users work with large ones. This is an error: disk is cheap, and performance problems will only be noticed when it's too late.

DON'T import bulk data with INSERT

Unless strictly necessary. Use DTS or the BCP utility instead and you'll have a solution that is both flexible and fast.
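
For example, a single bcp invocation can bulk-load a table from a flat file; the database, file, and server names here are hypothetical:

    bcp Northwind.dbo.Customers in customers.dat -c -T -S MYSERVER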

DO beware of timeouts

When querying a database, the default timeout is often low, like 15 or 30 seconds. Remember that report queries may run longer than this, especially as your database grows.

DON'T ignore simultaneous editing

Sometimes two users will edit the same record at the same time. When they write, the last writer wins, and some of the updates will be lost. It's easy to detect this situation: create a timestamp column and check it before you write. If possible, merge the changes. If there is a conflict, prompt the user for some action.
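
A minimal sketch of the check, assuming a hypothetical Customers table with a timestamp column named RowVer; @id, @newPhone, and @lastRead (the value read when the record was loaded) are illustrative:

    UPDATE Customers
    SET    Phone = @newPhone
    WHERE  CustomerID = @id AND RowVer = @lastRead

    IF @@ROWCOUNT = 0
        PRINT 'Record was changed by another user: reload and merge.'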

DON'T do SELECT max(ID) from Master when inserting in a Detail table.

This is another common mistake, and it will fail when two users are inserting data at the same time. Use SCOPE_IDENTITY, IDENT_CURRENT or @@IDENTITY instead. Avoid @@IDENTITY if possible, because it can introduce some nasty bugs with triggers.
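
A sketch of the safe pattern, following the Master and Detail names from the heading:

    DECLARE @masterID int

    INSERT INTO Master (Name) VALUES ('New order')
    SET @masterID = SCOPE_IDENTITY()   -- identity generated in this scope only

    INSERT INTO Detail (MasterID, Item) VALUES (@masterID, 'First line')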

DO avoid NULLable columns

When possible. NULLable columns consume an extra byte per column in each row and have more overhead associated with them when querying data. The DAL will be harder to code, too, because every time you access such a column you'll need to check it for NULL.

I'm not saying that NULLs are evil incarnate, like some people claim. I believe they have good uses and can simplify coding when "missing data" is part of your business rules. But sometimes NULLable columns are used in situations like this:

CustomerName1
CustomerAddress1
CustomerEmail1
CustomerName2
CustomerAddress2
CustomerEmail2
CustomerName3
CustomerAddress3
CustomerEmail3

This is horrible. Please, don't do this, normalize your table. It will be more flexible and faster, and will reduce the NULLable columns.
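
A minimal sketch of the normalized alternative; the table and column names are hypothetical:

    -- One row per contact instead of repeated, mostly-NULL column groups
    CREATE TABLE CustomerContact (
        ContactID  int IDENTITY PRIMARY KEY,
        CustomerID int NOT NULL REFERENCES Customer (CustomerID),
        Name       varchar(50)  NOT NULL,
        Address    varchar(100) NOT NULL,
        Email      varchar(100) NOT NULL
    )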

DON'T use the TEXT datatype

Unless you are using it for really large data. The TEXT datatype is not flexible to query, is slow, and wastes a lot of space if used incorrectly. Sometimes a VARCHAR column will handle your data better.

DON'T use temporary tables

Unless strictly necessary. Often a subquery can substitute for a temporary table. Temporary tables induce overhead and will give you a big headache when programming under COM+, because COM+ uses a database connection pool and temporary tables will last forever. In SQL Server 2000, there are alternatives, like the TABLE data type, which can provide in-memory solutions for small tables inside stored procedures, too.
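
A short sketch of the TABLE data type alternative; the contents are hypothetical:

    -- Scoped to the batch or procedure: no COM+ connection-pool leftovers
    DECLARE @recent TABLE (
        OrderID   int,
        OrderDate datetime
    )

    INSERT INTO @recent
    SELECT OrderID, OrderDate FROM Orders WHERE OrderDate > '20070101'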

DO learn how to read a query execution plan

The SQL Server Query Analyzer is your friend; through it, you'll learn a lot about how the engine works and how query and index design affect performance.

DO use referential integrity

This can be a great time saver. Define all your keys, unique constraints and foreign keys. Every validation you create on the server will save you time in the future.

Conclusion

As I've said before, this is by no means a complete SQL Server performance and best practices guide. This would take a complete book to cover. But I really believe that this is a good start, and if you follow these practices, surely you will have much less trouble in the future.

Wednesday, July 11, 2007

1Z0-042: Creating Oracle Database 10g

OCP: Oracle Certified Professional

1.1 Question

Your database is running in the shared server mode. You want to ensure that the memory allocated to the shared pool is completely used by the application users and not by RMAN processes or any other I/O server processes.

Which component of the Shared Global Area (SGA) should be allocated memory to achieve the objective?

· java pool

· log buffer

· large pool

· buffer cache

1.2 Answer

large pool

1.3 Explanation

The large pool should be allocated memory to ensure that the memory allocated to the shared pool is completely used by the application users and not by RMAN processes or any other I/O server processes. Increasing the size of the shared pool, setting up a reserved area, and pinning PL/SQL packages are all effective methods of improving the performance of the shared pool.

However, the performance of the shared pool can be negatively impacted by SQL-intensive operations, such as those that occur when using multiple I/O server processes and Oracle's Recovery Manager (RMAN) utility. The large pool can be configured manually to allocate memory to I/O server processes and RMAN processes. By doing this, the memory allocated to the shared pool will not be consumed by I/O server processes or RMAN processes. Instead, it will be available to the application processes.
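
For example, the large pool can be sized (or resized) through the LARGE_POOL_SIZE initialization parameter; the value below is only illustrative:

    ALTER SYSTEM SET LARGE_POOL_SIZE = 64M SCOPE=BOTH;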

Allocating memory to the java pool will not ensure that the memory allocated to the shared pool is completely used by application users and not by RMAN processes or any other I/O server processes. The java pool is a specific area in the SGA and is used to run Java-specific applications.

Allocating memory to the log buffer will not ensure that the memory allocated to the shared pool is completely used by the application users and not by the RMAN processes or any other I/O server processes. The memory allocated to the log buffer is not used by the RMAN processes. The larger the size of the redo log buffer, the less likely it is for the user server process to experience a wait when trying to place redo entries into the log buffer.

Allocating memory to the buffer cache will not ensure that the memory allocated to the shared pool is completely used by the application users and not by the RMAN processes or any other I/O server processes. The memory allocated to the buffer cache is not used by RMAN processes. The larger the size of the buffer cache, the less likely it is for cached buffers to be moved out of the cache by the least recently used (LRU) list.

2.1 Question

Which background process and associated database component guarantees that committed data is saved even when the changes have not been recorded in the datafiles?

· CKPT and control file

· LGWR and online redo log files

· DBWn and archived redo log files

· DBWn and database buffer cache

2.2 Answer

LGWR and online redo log files

2.3 Explanation

The log writer process, LGWR, and online redo log files guarantee that committed data is saved even when the changes have not been recorded in the datafiles. The log writer process writes the blocks contained in the redo log buffer of the SGA to the online redo log files. The log writer process also writes the buffers to the online redo log files when a user transaction is committed. LGWR writes the committed data to the online redo log files, thus guaranteeing that the committed data is saved even if it has not been written to the datafiles.

The checkpoint process, CKPT, and control file do not guarantee that committed data is saved even when the changes have not been recorded in the datafiles. Checkpoints help to reduce the time required for instance recovery. A checkpoint is an event that signals DBWn to flush the modified data from the buffer cache to the disk, and CKPT updates the control file and datafiles. At checkpoints, the modified blocks from the database buffer cache are written to the datafiles by DBWn. If a checkpoint is initiated before the user commits a transaction, the data blocks modified by that transaction will be written to the datafiles even though the transaction has not been committed. The control file is used to record structural changes in the database.

The database writer processes, DBWn, and archived redo log files do not guarantee that committed data is saved even when the changes have not been recorded in the datafiles. The DBWn process writes the contents of the dirty buffers contained in the buffer cache to the datafiles. The archived redo log files are used for database recovery and are considered offline redo log files.

The database writer processes, DBWn, and database buffer cache do not guarantee that committed data is saved even when the changes have not been recorded in the datafiles. The DBWn process writes the contents of the dirty buffers contained in the buffer cache to the datafiles. The database buffer cache is the area of memory that caches the database data, containing blocks from the datafiles that have been read recently.

3.1 Question

Which three statements correctly describe the Manage Templates option available with DBCA? (Choose three.)

· It allows you to create a template based on another template.

· It allows you to create a database with a structure similar to an existing database.

· It allows you to create a template that contains only the data of an existing database.

· It allows you to create a template based on all the tablespaces of an existing database.

· It allows you to create a template that contains only the structure of an existing database.

· It allows you to create a template that contains the data as well as structure of an existing database.

3.2 Answer

· It allows you to create a template based on another template.

· It allows you to create a template that contains only the structure of an existing database.

· It allows you to create a template that contains the data as well as structure of an existing database.

3.3 Explanation

The Manage Templates option available with DBCA allows you to:

· Create a template based on another template.

· Create a template that contains only the structure of the database.

· Create a template that contains the data as well as structure of the database.

The Manage Templates option in DBCA is accessed on the Operations page in DBCA. DBCA uses templates, which are XML files, when creating databases. Each template can include database options and other specifics of the database, such as initialization parameters. Using these templates makes it easier to create and clone databases.

Using the Manage Templates option in DBCA, you cannot create a database. It can only be used to create a template from an existing template or an existing database. This template can then be used to create the database.

Using the Manage Templates option in DBCA, you cannot create a template that contains only the data of an existing database. You can create a template that contains the data as well as structure of an existing database.

Using the Manage Templates option in DBCA, you cannot create a template based on all the tablespaces of an existing database; a template cannot be based on individual tablespaces.

4.1 Question

You are required to create a new database using DBCA. The database is required to store historical data pertaining to the last 20 years for a multinational bank. This data is generally accessed to create reports needed in different quarters of the year. In addition, new data is inserted into the database at the end of every month.

Considering these requirements, which template would be the best template to use to create the database?

· Data Warehouse

· General Purpose

· Custom Database

· Transaction Processing

4.2 Answer

Data Warehouse

4.3 Explanation

In this scenario, it would be best to use the Data Warehouse template because you have a large volume of historical data. This historical data is typically read-only and used generally to produce reports.

The General Purpose template can be used to create this database, but because the data specifically matches the requirements of a data warehouse, you should use the Data Warehouse template to create the database. The General Purpose template combines features of the Data Warehouse and Transaction Processing templates and is usually used when you are creating a database for which you are not sure of the kind of data that will be stored.

The Custom Database template can be used to create this database, but because the data specifically matches the requirements of a data warehouse, you should use the Data Warehouse template. The Custom Database template is generally used for scenarios that are more complex. Using the Custom Database template, you can explicitly define the options for the structure of the database that is to be created.

The Transaction Processing template is generally used in situations where the number of concurrent users is higher and the data is accessed and updated more often, such as in OLTP systems.

5.1 Question

You have installed Oracle Enterprise Manager 10g Database Control on your computer to manage your production database located on a remote host computer through a Web-enabled interface.

Which component is NOT a component of Oracle Enterprise Manager 10g Database Control?

· Oracle Management Agent

· Oracle Management Service

· Oracle Management Interface

· Oracle Management Repository

5.2 Answer

Oracle Management Interface

5.3 Explanation

The Oracle Management Interface is not part of Oracle Enterprise Manager 10g Database Control. Oracle Enterprise Manager 10g Database Control consists of the following components:

· Oracle Management Agent

· Oracle Management Service

· Oracle Management Repository

All of the other options are incorrect because they are components of Oracle Enterprise Manager 10g Database Control.

6.1 Question

You are creating a database using DBCA. Which options can be configured when creating a database using DBCA? (Choose all that apply.)

· character sets

· memory sizing

· database block size

· connection mode

· maximum number of users in the database

· default profile allocation for users

6.2 Answer

character sets

memory sizing

database block size

connection mode

6.3 Explanation

When creating a database using DBCA, you can configure the character sets to be used by the database, the memory sizing option, the database block size, and the connection mode used to connect to the server. When you create a database using DBCA and select the Create Database link on the DBCA Operations page, a wizard is started. One of the steps of the wizard allows you to set initialization parameter values for these, as well as other parameters. The character sets are specific to the territory of the database and language used by the database. The memory sizing option includes the distribution of physical memory into different components of the Oracle database. The block-sizing option configures the database block size to be used by the Oracle database that is being created. The connection mode configures which connection mode, shared server or dedicated server, will be used to connect to the server. DBCA can also be used to schedule database backups on a regular basis.

You cannot configure the maximum number of users in the database when using DBCA to create a database. However, the maximum number of users in the database can be configured at the database level by using the initialization parameter LICENSE_MAX_USERS.

You cannot configure the default profile allocation for users when using DBCA to create a database. This is configured at the database level when users are created or by using the DEFAULT profile.

7.1 Question

Your database server is running in shared server mode. Which component is a component of the Program Global Area (PGA) when the database is in shared server mode?

· shared pool

· stack space

· user session data

· memory structures

· cursor state information

7.2 Answer

stack space

7.3 Explanation

In shared server mode, only stack space is a component of the PGA. The other components, namely user session data, cursor state information, the shared pool, and other memory structures, are all part of the System Global Area (SGA). When the database server is running in shared server mode, the PGA is not a part of the SGA, and the PGA contains only stack space.

The shared pool is not a part of the PGA in shared server mode. It is a part of the SGA.

The user session data is not part of the PGA. It is a part of the SGA.

The memory structures are not a part of the PGA, but rather part of the SGA.

The cursor state information is stored in the SGA when the database is in shared server mode.

8.1 Question

You are using an spfile to start the database. The maximum number of users in your database has been set to 150. Because the number of users has already reached the maximum limit, you are not able to create more users.

Which statement should you issue to increase the maximum number of users in this database and keep the change persistent without affecting users who are connected to the database?

· ALTER SYSTEM SET LICENSE_MAX_USERS=200;

· ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200;

· ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=SPFILE;

· ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=MEMORY;

· ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200 SCOPE=SPFILE;

8.2 Answer

ALTER SYSTEM SET LICENSE_MAX_USERS=200;

8.3 Explanation

You should use the ALTER SYSTEM SET LICENSE_MAX_USERS=200; statement. When the instance is started with an spfile, the default scope for ALTER SYSTEM is SCOPE=BOTH, so this statement changes the parameter both in memory and in the spfile: the change takes effect immediately, without affecting connected users, and persists across restarts. The maximum number of users in a database is set using the initialization parameter LICENSE_MAX_USERS.
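
With the default made explicit, the statement is equivalent to:

    ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=BOTH;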

You should not use the ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200; statement because the parameter LICENSE_MAX_SESSIONS does not increase the number of users that can exist in a database. The LICENSE_MAX_SESSIONS parameter specifies the maximum number of user sessions that can be created in a database at a time.

You should not use the ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=SPFILE; statement because this will modify the parameter only in the spfile. The modification will come into effect only at the next startup.

You should not use the ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=MEMORY; statement because this will modify the parameter only in memory. The change will not be persistent at the next startup.

You should not use the ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200 SCOPE=SPFILE; statement because the parameter LICENSE_MAX_SESSIONS does not specify the number of users that can exist in a database. The LICENSE_MAX_SESSIONS parameter specifies the maximum number of user sessions that can be created in a database at a time.

9.1 Question

Your database is running in the ARCHIVELOG mode. After an instance crash, you start up the database by issuing the STARTUP command. Instance recovery is initiated by the SMON background process.

Which statements are true about the phases involved in the instance recovery performed by SMON? (Choose all that apply.)

· The instance recovery will recover data up to the last commit.

· The information used for instance recovery will be derived from the alert log.

· The information used for instance recovery will be derived from the control file.

· The uncommitted changes are rolled back using information in the undo segments.

· The information used for instance recovery will be derived from the data dictionary.

· The instance recovery will recover all the data entered into the database before the point of the crash.

· Only the committed changes stored in the online redo log file are applied to the affected data blocks.

· The committed and uncommitted changes stored in the online redo log are applied to the affected data blocks.

9.2 Answer

· The instance recovery will recover data up to the last commit.

· The information used for instance recovery will be derived from the control file.

· The uncommitted changes are rolled back using information in the undo segments.

· The committed and uncommitted changes stored in the online redo log are applied to the affected data blocks.

9.3 Explanation

The instance recovery performed by SMON goes through the following phases:

· All the committed and uncommitted data from the online redo log is applied to the affected blocks.

· The uncommitted data applied to the data blocks is rolled back using the undo information in the undo segments.

· Information to be used for instance recovery, such as the online redo log sequence number, is obtained from the control file.

The instance recovery thus performed will recover the database only up to the last commit. Any uncommitted data will not be applied to the database.

The information used for instance recovery is not derived from the alert log or the data dictionary. Instead, it is derived from the control file.

All the data from the redo log file, whether committed or uncommitted, will be applied to the database; the uncommitted data is then rolled back by applying undo data from the undo segments.

10.1 Question

During your routine monitoring operations on the database, you observe that there is heavy contention on data dictionary tables whenever extents are allocated or freed from a tablespace.

Which action should you take as a measure to eliminate this contention on the SYSTEM tablespace?

· Use local extent management for the SYSTEM tablespace.

· Use bigfile tablespaces in the database to avoid contention.

· Use automatic segment space management for tablespaces.

· Use local extent management for nonsystem tablespaces if not using it already.

10.2 Answer

Use local extent management for nonsystem tablespaces if not using it already.

10.3 Explanation

To reduce contention caused on the data dictionary tables by allocation or freeing of extents, you can use local extent management for nonsystem tablespaces. If you use locally-managed tablespaces, the contention on data dictionary tables is reduced. This is because locally-managed tablespaces do not record free space or extent allocation in the data dictionary.
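
A sketch of creating such a tablespace; the tablespace name, datafile path, and size are hypothetical:

    CREATE TABLESPACE app_data
        DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M
        EXTENT MANAGEMENT LOCAL AUTOALLOCATE;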

Using local extent management for the SYSTEM tablespace will not eliminate the contention because the contention is mainly caused by allocation and deallocation of space in the nonsystem tablespace.

Using bigfile tablespaces will not eliminate the contention on data dictionary tables. A bigfile tablespace is mainly used to store a large volume of data thereby avoiding the requirement to create several smallfile tablespaces to store the same data.

Using automatic segment space management will not eliminate this contention because it addresses space management within segments, not extent allocation in the data dictionary. With automatic segment space management, the free and used space within a segment is tracked using bitmaps rather than free lists, replacing manually tuned storage parameters such as PCTUSED and FREELISTS. Using this technique will not reduce contention on data dictionary tables.

11.1 Question

Which stages of Structured Query Language (SQL) statement processing are performed when you issue a MERGE statement? (Choose all that apply.)

· The rows inserted are written to the datafiles.

· The data is fetched and returned to the users.

· The updated blocks are written to the datafiles.

· The changes made are recorded in the redo log files.

· The System Change Number is recorded in the control file.

· A parse tree is generated for the query if it does not already exist.

· Blocks of data are retrieved into the database buffer cache if they are not already present.

11.2 Answer

A parse tree is generated for the query if it does not already exist.

Blocks of data are retrieved into the database buffer cache if they are not already present.

11.3 Explanation

When a MERGE statement is issued at the SQL prompt, it undergoes the following stages:

· The PARSE stage in which the query is parsed to check for details, such as the syntax of the statement, privileges on the object, and optimal execution plan.

· The BIND stage in which any variables are assigned values.

· The EXECUTE stage in which the query is actually executed. At this stage, the data required by the user's query is retrieved from the database buffer cache and returned to the user.

The option that states that the inserted rows are written to the datafiles is incorrect because the data is written to datafiles only at checkpoints.

The option that states that the data is fetched and returned to the user is incorrect because the MERGE statement does not undergo a FETCH stage. The FETCH stage is only applicable to the SELECT statement.

The option that states that the updated blocks are written to the datafiles is incorrect because the updated data is written to datafiles only at checkpoints.

The option that states that the changes are recorded in the redo log files is incorrect because the changes are written to the redo log files only when a commit is performed.

The option that states that the System Change Number is recorded in the control file is incorrect because the System Change Number (SCN) is recorded in the control file only on a commit.

12.1 Question

You are a DBA with Startel. You have a set of four databases SALESDB, FINDB, HRDB, and PROD. You also have two listeners, L1 and L2, and an application server, APPS1, located at different locations around the globe. You have been assigned the task of managing all these components from your office located in New York.

Which two tools should you install to administer these components centrally on the network? (Choose two.)

· Oracle Management Agent

· Oracle Enterprise Manager 10g

· Oracle Enterprise Manager 10g Grid Control Console

· Oracle Enterprise Manager 10g Application Server Control

· Oracle Enterprise Manager 10g Database Control Console

12.2 Answer

Oracle Management Agent

Oracle Enterprise Manager 10g Grid Control Console

12.3 Explanation

To administer the databases, listeners, and Web application servers centrally, you need the Oracle Management Agent installed on all the host computers and the Oracle Enterprise Manager 10g Grid Control Console installed on the centralized computer, which you will use to administer the components located globally. The Oracle Management Agent monitors all targets on the host and sends the monitoring information to the middle-tier Management Service, and manages and maintains the host and its targets. The Oracle Enterprise Manager 10g Grid Control Console is used to administer the components located globally.

Oracle Enterprise Manager 10g does not provide any help in centrally administering databases located globally. Oracle Enterprise Manager 10g is used to monitor and manage your database.

Oracle Enterprise Manager 10g Application Server Control is not used for managing databases globally. It is used to manage Oracle Application Server 10g.

Oracle Enterprise Manager 10g Database Control Console is not used to manage databases globally. It can only be used to monitor and manage a single Oracle 10g instance at a time.

13.1 Question

Users in your application complain of slow response to queries. Upon analysis, you realize that the queries are being reparsed because they are aged out quickly.

Which component of the Shared Global Area (SGA) should you modify to avoid the reparsing of queries?

· java pool

· large pool

· library cache

· redo log buffer

· data dictionary cache

· database buffer cache

13.2 Answer

library cache

13.3 Explanation

Increasing the library cache component of the SGA will help avoid the reparsing of queries because the library cache contains the parse tree information of queries. The queries are reparsed because they are aged out too quickly from the library cache due to lack of space in the library cache. Adding more space to the library cache will ensure that the parse tree information for the queries is not aged out. Because space cannot be directly allocated to the library cache, the shared pool component of the SGA can be increased, thereby increasing the size of library cache, which is contained in the shared pool.
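
Because the library cache cannot be sized directly, the change is made through the shared pool; the value below is only illustrative (under 10g Automatic Shared Memory Management, setting SGA_TARGET lets Oracle size these components for you):

    ALTER SYSTEM SET SHARED_POOL_SIZE = 200M SCOPE=BOTH;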

The java pool is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help to avoid the reparsing of queries. The java pool is a specific area in the SGA that is used to run Java-specific applications.

To run these Java applications, you use the Java stored procedures in the database. The size of the java pool is determined by the JAVA_POOL_SIZE parameter specified in the initialization parameter file.

The large pool is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help to avoid the reparsing of queries. A large pool, when configured, is used to store the session memory information for shared servers and is also used for some RMAN operations.

The redo log buffer is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help to avoid the reparsing of queries. The redo log buffer is used to record all the changes made to the data blocks in a database. The redo log buffer is used in recovery.

The data dictionary cache is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help avoid the reparsing of queries. The data dictionary cache is a part of the shared pool, which stores the definitions of recently accessed data objects, such as tables and indexes.

The database buffer cache is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help avoid the reparsing of queries. The database buffer cache is used to cache the recently accessed data blocks that have been retrieved from the datafiles as a result of user queries on the database.

14.1 Question

You need to create a database similar to an existing database. The data and structure of the new database must be the same as the existing database.

Which action would best obtain the desired results?

· Create a database using the CREATE DATABASE statement. Then, perform a complete database import of the original database.

· Create a template of the existing database using DBCA that incorporates only the structure of the original database. Then, use this template to create the new database.

· Create a template of the existing database using DBCA that incorporates the structure and data of the original database. Then, use this template to create the new database.

· Create a template of the existing database using DBCA that incorporates only the structure of the original database and use this template to create the new database. Then, perform a complete database import of the original database.

14.2 Answer

Create a template of the existing database using DBCA that incorporates the structure and data of the original database. Then, use this template to create the new database.

14.3 Explanation

To create a database similar to an existing database with the same structure and data, use DBCA to create a template with the same structure as well as data from the existing database. Then, use this template to create the new database. When you create a template from an existing database using the structure as well as data, this template will contain all the physical and logical characteristics of the source database. The datafiles, log files, control files, and archive redo logs will all be the same in number and size. The other structures in the database, such as the tablespaces, tables, and user schemas, will be identical to those in the source database.

Using the CREATE DATABASE statement would require more administrative effort. You would need to specify all of the needed options to create the new database with the same options as the original database. Manually creating the database and then performing an import would require more effort than using DBCA to create the database from a template.

Using DBCA to create a template with only the same structure and using this template to create the new database would create the database, but not with the same data as the original database. You would need to rebuild the data in the database.

Using DBCA to create a template with only the same structure and using this template to create the new database followed by a complete database import is not the best choice. This will increase the time needed to create an identical database because you will need to perform a complete database import after the database creation.

15.1 Question

Eric is working as a database administrator with Telstar Corporation. He has been granting SYSDBA and SYSOPER privileges to certain users who must perform certain administrative tasks, such as starting and shutting down the database remotely.

Which file determines the number of users that can be granted the SYSDBA or SYSOPER privilege?

· trace file

· control file

· alert log file

· password file

· system datafile

15.2 Answer

password file

15.3 Explanation

The password file contains information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege.

While creating a password file, you can use the ENTRIES clause to specify the number of users who can be granted the SYSDBA or SYSOPER privilege. The password file can also be used to authenticate users who are trying to access the database remotely. The REMOTE_LOGIN_PASSWORDFILE initialization parameter can be set to provide a different authentication mechanism for the database. Setting the parameter to a value of NONE specifies that users should be authenticated by the operating system. Setting the parameter to a value of SHARED specifies that more than one database can use the same password file. Using this method, the user can be authenticated on one or more databases that use the same password file.

Setting the parameter to a value of EXCLUSIVE specifies that users are using the password file for authentication.
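
For reference, a password file is created with the orapwd utility; the file name, password, and entries value below are illustrative:

    orapwd file=$ORACLE_HOME/dbs/orapwORCL password=secret entries=10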

The trace file option is incorrect because a trace file cannot be used to determine information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. The trace files in an Oracle database are created for every server process and background process. Whenever the server or background process encounters an error or is terminated, the information regarding the error is written to the respective trace file. The trace file can be used to gather information regarding these server and background processes.

The control file option is incorrect because a control file cannot be used to determine information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. A control file in an Oracle database contains information about the physical structure of the database. The control file is required whenever you start the database and must be available throughout the life of the instance.

The alert log file option is incorrect because the alert log file does not contain information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. The alert log file in an Oracle database contains database-wide information about the errors and other important events in the database. The alert log file contains the startup and shutdown times of the database, the default parameters used to start the database, and other important information regarding the database.

The system datafile option is incorrect because a system datafile file cannot be used to determine information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. The system datafile in an Oracle database is a part of the SYSTEM tablespace. The SYSTEM tablespace is created in an Oracle database automatically upon database creation and constitutes the data dictionary.

16.1 Question

You have created a database using DBCA. Now, you want to change the configuration of the database using DBCA. Which option CANNOT be configured using DBCA?

· configure the database to enable daily backup

· configure the database to run as a shared server

· configure the database to use the Local naming method

· configure the database to use password file authentication

· configure the database to use Automatic Storage Management

16.2 Answer

configure the database to use the Local naming method

16.3 Explanation

Using DBCA, you cannot configure the naming method for resolving a net service name. The naming method for the database can be configured using the net configuration files or the Net Configuration Assistant.

DBCA can be used to configure the database to enable daily backup. To enable daily backup of the database, you must check the Enable Daily Backup check box on the Management Options page. While enabling backup, you can specify the time when the daily backup will be performed and the username and password of the user performing the backup.

DBCA can be used to configure the database to run as a shared server. To configure the database to run as a shared server, you must select the appropriate connection mode in the Connection Mode tab on the Initialization Parameters page.

DBCA can be used to configure the database to use password file authentication by setting the appropriate parameters. You can set the REMOTE_LOGIN_PASSWORDFILE parameter to EXCLUSIVE or SHARED using the Initialization Parameters page to enable the database to use password file authentication.

DBCA can be used to configure the database to use Automatic Storage Management. To configure the database to use Automatic Storage Management, you must select the Automatic Storage Management (ASM) button on the Passwords and Storage page.

17.1 Question

Which activities in the database signal DBWn to write the data from the database buffer cache to datafiles? (Choose all that apply.)

· whenever there is a log switch

· whenever a user transaction is committed

· whenever the database is shut down cleanly

· whenever a table is dropped from the database

· whenever a tablespace in the database is taken offline

· whenever the server process cannot find free buffers in the database buffer cache

17.2 Answer

whenever there is a log switch

whenever the database is shut down cleanly

whenever a tablespace in the database is taken offline

whenever the server process cannot find free buffers in the database buffer cache

17.3 Explanation

DBWn writes data to datafiles whenever there is a log switch, whenever the database is shut down cleanly, whenever a tablespace in the database is taken offline, and whenever the server process cannot find free buffers in the database buffer cache. DBWn writes data to datafiles any time a checkpoint occurs. A checkpoint occurs when you shut down the database cleanly, whenever there is a log switch and whenever a checkpoint is forced by the DBA using the ALTER SYSTEM CHECKPOINT statement. Also, an incremental checkpoint occurs when you take a tablespace in the database offline. In addition, when the server process scans the database buffer cache and is unable to find any free buffers, it signals DBWn to write the data from the database buffer cache to the datafiles.

DBWn does not write data to the datafiles when a user transaction is committed. On a commit, the redo entries for the transaction are written to the online redo log files; the modified data blocks themselves are written to the datafiles later, at checkpoints.

DBWn does not write data to the datafiles when a table is dropped from the database. When a table is dropped, only the redo generated by the drop is written to the redo log buffer.

Tuesday, July 10, 2007

1Z0-042: Installing Oracle Database 10g Software

OCP: Oracle Certified Professional

2007

trungnemo.blogspot.com


Content

1 Item 1 (Ref:1Z0-042.1.4.1)

1.1 Question

1.2 Answer

1.3 Explanation

2 Item 2 (Ref:1Z0-042.1.4.3)

2.1 Question

2.2 Answer

2.3 Explanation

3 Item 3 (Ref:1Z0-042.1.3.1)

3.1 Question

3.2 Answer

3.3 Explanation

4 Item 4 (Ref:1Z0-042.2.4.3)

4.1 Question

4.2 Answer

4.3 Explanation

5 Item 5 (Ref:1Z0-042.2.3.2)

5.1 Question

5.2 Answer

5.3 Explanation

6 Item 6 (Ref:1Z0-042.2.4.2)

6.1 Question

6.2 Answer

6.3 Explanation

1.1 Question

You are in the process of installing Oracle Database 10g and have set the environment variable ORACLE_HOME to /oracle/ora10g.

What does this value specify?

· the directory location for the base of OFA

· the directory location where the Oracle software is installed

· the directory location where the script files of the database are stored

· the directory location where the operating system searches for Oracle executables, such as SQL*Plus

1.2 Answer

the directory location where the Oracle software is installed

1.3 Explanation

· The ORACLE_HOME environment variable specifies the directory location where the Oracle software is installed. In this scenario, /oracle/ora10g is the directory where the Oracle software is installed.

· The ORACLE_HOME environment variable does not specify the directory location for the base of OFA. The ORACLE_BASE environment variable specifies the directory location for the base of OFA.

· The ORACLE_HOME environment variable does not specify the directory where the script files of the database are stored. The $ORACLE_HOME/rdbms/admin directory is the location where the script files of the database are stored.

· The ORACLE_HOME environment variable does not specify the directory location where the operating system searches for Oracle executables, such as SQL*Plus. The directory where the operating system searches for Oracle executables is specified using the PATH environment variable.

2.1 Question

Your Oracle server has four databases SALESDB, FINDB, HRDB, and TESTDB installed on it. The SALESDB database is currently running. You set the ORACLE_SID environment variable on the operating system as follows:

C:\> SET ORACLE_SID=TESTDB

Then, you try to start the TESTDB database from another SQL*Plus prompt using this command:

SQL> STARTUP;

What will be the result of this command?

· The command will start the TESTDB database without any alterations to the SALESDB database.

· The command will start the TESTDB database and shut down the already-running SALESDB database.

· This command will return an "Oracle already running, shut it down first" error.

· The command will not start the TESTDB database because the assignment of the ORACLE_SID environment variable is incorrect, and you must specify the specific parameter file when issuing the STARTUP command.

2.2 Answer

The command will start the TESTDB database without any alterations to the SALESDB database.

2.3 Explanation

· The command will start the TESTDB database without any alterations to the SALESDB database. Setting the ORACLE_SID environment variable specifies the instance name for the database that is to be started. In this scenario, because there are four databases on the same server you should specify the instance name that is to be started using the ORACLE_SID environment variable. Setting this variable will not affect the instances that are already up and running.

· The option stating that the command will start the TESTDB database and shut down the already-running SALESDB instance is incorrect because the SALESDB instance will not shut down as a result of the STARTUP command.

· The option stating that the command will return an error is incorrect because the error is generated only if you have not specified the instance name by setting the ORACLE_SID environment variable. If this variable is specified, this error will not be generated.

· The option stating that the command will not start the TESTDB database because of an incorrect ORACLE_SID assignment is wrong because the variable is set appropriately in this scenario. Additionally, it is not necessary to specify a specific parameter file when issuing the STARTUP command. If no parameter file is specified, the default SPFILE will be used; if the SPFILE is not available, the default PFILE will be used to start the database.
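
Put together, the full sequence on the server would look like this (the Windows prompt follows the question's own example):

    C:\> SET ORACLE_SID=TESTDB
    C:\> sqlplus / as sysdba
    SQL> STARTUP;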

3.1 Question

Which three statements correctly describe the Manage Templates option available with DBCA? (Choose three.)

· It allows you to create a template based on another template.

· It allows you to create a database with a structure similar to an existing database.

· It allows you to create a template that contains only the data of an existing database.

· It allows you to create a template based on all the tablespaces of an existing database.

· It allows you to create a template that contains only the structure of an existing database.

· It allows you to create a template that contains the data as well as structure of an existing database.

3.2 Answer

· It allows you to create a template based on another template.

· It allows you to create a template that contains only the structure of an existing database.

· It allows you to create a template that contains the data as well as structure of an existing database.

3.3 Explanation

The Manage Templates option available with DBCA allows you to:

  • Create a template based on another template.
  • Create a template that contains only the structure of the database.
  • Create a template that contains the data as well as structure of the database.

The Manage Templates option in DBCA is accessed on the Operations page in DBCA. DBCA uses templates, which are XML files, when creating databases. Each template can include database options and other specifics of the database, such as initialization parameters. Using these templates makes it easier to create and clone databases.

Using the Manage Templates option in DBCA, you cannot create a database. It can only be used to create a template from an existing template or an existing database. This template can then be used to create the database.

Using the Manage Templates option in DBCA, you cannot create a template that contains only the data of an existing database. You can create a template that contains the data as well as structure of an existing database.

Using the Manage Templates option in DBCA, you cannot create a template based on all the tablespaces of an existing database.

4.1 Question

You are required to create a new database using DBCA. The database is required to store historical data pertaining to the last 20 years for a multinational bank. This data is generally accessed to create reports needed in different quarters of the year. In addition, new data is inserted into the database at the end of every month.

Considering these requirements, which template would be the best template to use to create the database?

· Data Warehouse

· General Purpose

· Custom Database

· Transaction Processing

4.2 Answer

Data Warehouse

4.3 Explanation

· In this scenario, it would be best to use the Data Warehouse template because you have a large volume of historical data. This historical data is typically read-only and used generally to produce reports.

· The General Purpose template can be used to create this database, but because the data specifically matches the requirements of a data warehouse, you should use the Data Warehouse template to create the database. The General Purpose template combines features of the Data Warehouse and Transaction Processing templates and is usually used when you are creating a database for which you are not sure of the kind of data that will be stored.

· The Custom Database template can be used to create this database, but because the data specifically matches the requirements of a data warehouse, you should use the Data Warehouse template. The Custom Database template is generally used for scenarios that are more complex. Using the Custom Database template, you can explicitly define the options for the structure of the database that is to be created.

· The Transaction Processing template is generally used in situations where the number of concurrent users is higher and the data is accessed and updated more often, such as OLTP systems.

5.1 Question

You have installed Oracle Enterprise Manager 10g Database Control on your computer to manage your production database located on a remote host computer through a Web-enabled interface.

Which component is NOT a component of Oracle Enterprise Manager 10g Database Control?

· Oracle Management Agent

· Oracle Management Service

· Oracle Management Interface

· Oracle Management Repository

5.2 Answer

Oracle Management Interface

5.3 Explanation

The Oracle Management Interface is not part of Oracle Enterprise Manager 10g Database Control. Oracle Enterprise Manager 10g Database Control consists of the following components:

· Oracle Management Agent

· Oracle Management Service

· Oracle Management Repository

All of the other options are incorrect because they are components of Oracle Enterprise Manager 10g Database Control.

6.1 Question

You are creating a database using DBCA. Which options can be configured when creating a database using DBCA? (Choose all that apply.)

· character sets

· memory sizing

· database block size

· connection mode

· maximum number of users in the database

· default profile allocation for users

6.2 Answer

character sets

memory sizing

database block size

connection mode

6.3 Explanation

When creating a database using DBCA, you can configure the character sets to be used by the database, the memory sizing option, the database block size, and the connection mode used to connect to the server. When you create a database using DBCA and select the Create Database link on the DBCA Operations page, a wizard is started. One of the steps of the wizard allows you to set initialization parameter values for these, as well as other parameters. The character sets are specific to the territory of the database and language used by the database. The memory sizing option includes the distribution of physical memory into different components of the Oracle database. The block-sizing option configures the database block size to be used by the Oracle database that is being created. The connection mode configures which connection mode, shared server or dedicated server, will be used to connect to the server. DBCA can also be used to schedule database backups on a regular basis.

You cannot configure the maximum number of users in the database when using DBCA to create a database. However, the maximum number of users in the database can be configured at the database level by using the initialization parameter

LICENSE_MAX_USERS.

You cannot configure the default profile allocation for users when using DBCA to create a database. This is configured at the database level when users are created or by using the DEFAULT profile.

7 Item: 7 (Ref:1Z0-042.1.1.1)

7.1 Question

You are in the process of installing Oracle 10g on your host computer. Click the Exhibit(s) button to view the hardware composition of your host computer.

Which action should you take to successfully complete the installation of Oracle 10g on your computer?

· Increase the RAM on your computer to 512 MB.

· Increase the virtual memory on your computer to 1 GB.

· Increase the free hard disk space on your computer to a minimum of 2 GB.

· Increase the free hard disk space on your computer to a minimum of 1.5 GB.

· Increase the temporary disk space on your computer to a minimum of 200 MB.

7.2 Answer

Increase the free hard disk space on your computer to a minimum of 1.5 GB.

7.3 Explanation

The minimum free hard disk space required for a successful installation of Oracle 10g is 1.5 GB. You need to increase the free hard disk space on your computer to a minimum of 1.5 GB.

You do not need to increase the RAM on your computer to 512 MB because the minimum amount of physical memory or RAM required for a successful installation of Oracle 10g is 256 MB. However, Oracle recommends that you use 512 MB of RAM for an Oracle 10g database.

You do not need to increase the virtual memory on your computer because the minimum amount of virtual memory required for a successful installation of Oracle 10g is twice the amount of RAM, which is 512 MB in this scenario.

You do not need to increase the free hard disk space on your computer to a minimum of 2 GB because the minimum amount of hard disk space required for a successful installation of Oracle 10g is 1.5 GB.

You do not need to increase the temporary disk space on your computer to a minimum of 200 MB because the minimum amount of temporary disk space required for a successful installation of Oracle 10g is 100 MB.

8 Item: 8 (Ref:1Z0-042.1.3.2)

8.1 Question

Your Oracle server is running on a Linux platform. You create a new database NEWDB on this server.

Which file is updated with the Oracle system identifiers (SIDs) when this new Oracle database, NEWDB, is created?

· oratab

· crontab

· catexp.sql

· orainstRoot.sh

8.2 Answer

oratab

8.3 Explanation

The oratab file on a Linux platform is updated with the Oracle SIDs when a new Oracle database is created on the Linux platform. On Linux, the oratab file is typically located in the /etc directory; on some other UNIX platforms, such as Solaris, it is found under /var/opt/oracle.
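
Each oratab entry has the form SID:ORACLE_HOME:startup_flag, so a line for the new database might look like this (the home path is hypothetical):

    NEWDB:/u01/app/oracle/product/10.1.0/db_1:N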

The crontab file on a Linux platform is used for scheduling jobs. This file is not updated with the Oracle SIDs when a new Oracle database is created on the Linux platform.

The catexp.sql file is a script file that creates data dictionary views in an Oracle database to support import and export activities in the Oracle database. This file is not updated with the Oracle SIDs when a new Oracle database is created on the Linux platform.

The orainstRoot.sh script file creates the inventory pointer file while installing the Oracle 10g database on a Linux platform. This file is not updated with the Oracle SIDs when a new Oracle database is created on the Linux platform.