Wednesday, July 11, 2007


1Z0-042: Creating Oracle Database 10g

OCP: Oracle Certified Professional

1.1 Question

Your database is running in the shared server mode. You want to ensure that the memory allocated to the shared pool is completely used by the application users and not by RMAN processes or any other I/O server processes.

Which component of the Shared Global Area (SGA) should be allocated memory to achieve the objective?

· java pool

· log buffer

· large pool

· buffer cache

1.2 Answer

large pool

1.3 Explanation

The large pool should be allocated memory to ensure that the memory allocated to the shared pool is completely used by the application users and not by RMAN processes or any other I/O server processes. Increasing the size of the shared pool, setting up a reserved area, and pinning PL/SQL packages are all effective methods of improving the performance of the shared pool.

However, the performance of the shared pool can be negatively impacted by SQL-intensive operations, such as those that occur when using multiple I/O server processes and Oracle's Recovery Manager (RMAN) utility. The large pool can be configured manually to allocate memory to I/O server processes and RMAN processes. By doing this, the memory allocated to the shared pool will not be consumed by I/O server processes or RMAN processes. Instead, it will be available to the application processes.
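As an illustration only (the 64M value is hypothetical, and SCOPE=BOTH assumes the instance was started with an spfile), the large pool is sized with the LARGE_POOL_SIZE initialization parameter and can be checked in V$SGASTAT:

-- Illustrative sketch: give the large pool its own memory so RMAN and
-- I/O server processes do not draw from the shared pool.
ALTER SYSTEM SET LARGE_POOL_SIZE = 64M SCOPE=BOTH;

-- Check the current large pool allocation (size reported in bytes).
SELECT pool, SUM(bytes) AS bytes
FROM v$sgastat
WHERE pool = 'large pool'
GROUP BY pool;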

Allocating memory to the java pool will not ensure that the memory allocated to the shared pool is completely used by application users and not by RMAN processes or any other I/O server processes. The java pool is a specific area in the SGA and is used to run Java-specific applications.

Allocating memory to the log buffer will not ensure that the memory allocated to the shared pool is completely used by the application users and not by the RMAN processes or any other I/O server processes. The memory allocated to the log buffer is not used by the RMAN processes. The larger the size of the redo log buffer, the less likely it is for the user server process to experience a wait when trying to place redo entries into the log buffer.

Allocating memory to the buffer cache will not ensure that the memory allocated to the shared pool is completely used by the application users and not by the RMAN processes or any other I/O server processes. The memory allocated to the buffer cache is not used by RMAN processes. The larger the size of the buffer cache, the less likely it is for cached buffers to be moved out of the cache by the least recently used (LRU) list.

2.1 Question

Which background process and associated database component guarantees that committed data is saved even when the changes have not been recorded in the datafiles?

· CKPT and control file

· LGWR and online redo log files

· DBWn and archived redo log files

· DBWn and database buffer cache

2.2 Answer

LGWR and online redo log files

2.3 Explanation

The log writer process, LGWR, and the online redo log files guarantee that committed data is saved even when the changes have not been recorded in the datafiles. The log writer process writes the redo entries contained in the redo log buffer of the SGA to the online redo log files, and it does so whenever a user transaction is committed. Because LGWR writes the committed changes to the online redo log files at commit time, the committed data is guaranteed to be recoverable even if it has not yet been written to the datafiles.
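For reference, the online redo log groups that LGWR writes to, and their member files, can be listed from the standard V$LOG and V$LOGFILE views (a simple sketch; output depends on the database):

-- Redo log groups with their current sequence numbers and status
SELECT group#, sequence#, members, status FROM v$log;

-- Physical member files of each redo log group
SELECT group#, member FROM v$logfile ORDER BY group#;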

The checkpoint process, CKPT, and the control file do not guarantee that committed data is saved even when the changes have not been recorded in the datafiles. Checkpoints help to reduce the time required for instance recovery. A checkpoint is an event that signals DBWn to flush the modified data from the buffer cache to disk, while CKPT updates the control file and the datafile headers. At a checkpoint, the modified blocks from the database buffer cache are written to the datafiles by DBWn. If a checkpoint is initiated before the user commits a transaction, the data blocks modified by that transaction will be written to the datafiles even though the transaction has not been committed. The control file is used to record structural changes in the database.

The database writer processes, DBWn, and archived redo log files do not guarantee that committed data is saved even when the changes have not been recorded in the datafiles. The DBWn process writes the contents of the dirty buffers contained in the buffer cache to the datafiles. The archived redo log files are used for database recovery and are considered offline redo log files.

The database writer processes, DBWn, and database buffer cache do not guarantee that committed data is saved even when the changes have not been recorded in the datafiles. The DBWn process writes the contents of the dirty buffers contained in the buffer cache to the datafiles. The database buffer cache is the area of memory that caches the database data, containing blocks from the datafiles that have been read recently.

3.1 Question

Which three statements correctly describe the Manage Templates option available with DBCA? (Choose three.)

· It allows you to create a template based on another template.

· It allows you to create a database with a structure similar to an existing database.

· It allows you to create a template that contains only the data of an existing database.

· It allows you to create a template based on all the tablespaces of an existing database.

· It allows you to create a template that contains only the structure of an existing database.

· It allows you to create a template that contains the data as well as structure of an existing database.

3.2 Answer

· It allows you to create a template based on another template.

· It allows you to create a template that contains only the structure of an existing database.

· It allows you to create a template that contains the data as well as structure of an existing database.

3.3 Explanation

The Manage Templates option available with DBCA allows you to:

· Create a template based on another template.

· Create a template that contains only the structure of the database.

· Create a template that contains the data as well as structure of the database.

The Manage Templates option in DBCA is accessed from the Operations page in DBCA. DBCA uses templates, which are XML files, when creating databases. Each template can include database options and other specifics of the database, such as initialization parameters. Using these templates makes it easier to create and clone databases.

Using the Manage Templates option in DBCA, you cannot create a database. It can only be used to create a template from an existing template or an existing database. This template can then be used to create the database.

Using the Manage Templates option in DBCA, you cannot create a template that contains only the data of an existing database. You can create a template that contains the data as well as structure of an existing database.

Using the Manage Templates option in DBCA, you cannot create a template based on all the tablespaces of an existing database. Templates are created from an entire existing database or from another template, not from a selection of its tablespaces.

4.1 Question

You are required to create a new database using DBCA. The database is required to store historical data pertaining to the last 20 years for a multinational bank. This data is generally accessed to create reports needed in different quarters of the year. In addition, new data is inserted into the database at the end of every month.

Considering these requirements, which template would be the best template to use to create the database?

· Data Warehouse

· General Purpose

· Custom Database

· Transaction Processing

4.2 Answer

Data Warehouse

4.3 Explanation

In this scenario, it would be best to use the Data Warehouse template because you have a large volume of historical data. This historical data is typically read-only and is generally used to produce reports.

The General Purpose template can be used to create this database, but because the data specifically matches the requirements of a data warehouse, you should use the Data Warehouse template to create the database. The General Purpose template combines features of the Data Warehouse and Transaction Processing templates and is usually used when you are creating a database for which you are not sure of the kind of data that will be stored.

The Custom Database template can be used to create this database, but because the data specifically matches the requirements of a data warehouse, you should use the Data Warehouse template. The Custom Database template is generally used for scenarios that are more complex. Using the Custom Database template, you can explicitly define the options for the structure of the database that is to be created.

The Transaction Processing template is generally used in situations where the number of concurrent users is high and the data is accessed and updated frequently, such as in OLTP systems.

5.1 Question

You have installed Oracle Enterprise Manager 10g Database Control on your computer to manage your production database located on a remote host computer through a Web-enabled interface.

Which component is NOT a component of Oracle Enterprise Manager 10g Database Control?

· Oracle Management Agent

· Oracle Management Service

· Oracle Management Interface

· Oracle Management Repository

5.2 Answer

Oracle Management Interface

5.3 Explanation

The Oracle Management Interface is not part of Oracle Enterprise Manager 10g Database Control. Oracle Enterprise Manager 10g Database Control consists of the following components:

· Oracle Management Agent

· Oracle Management Service

· Oracle Management Repository

All of the other options are incorrect because they are components of Oracle Enterprise Manager 10g Database Control.

6.1 Question

You are creating a database using DBCA. Which options can be configured when creating a database using DBCA? (Choose all that apply.)

· character sets

· memory sizing

· database block size

· connection mode

· maximum number of users in the database

· default profile allocation for users

6.2 Answer

character sets

memory sizing

database block size

connection mode

6.3 Explanation

When creating a database using DBCA, you can configure the character sets to be used by the database, the memory sizing option, the database block size, and the connection mode used to connect to the server. When you create a database using DBCA and select the Create Database link on the DBCA Operations page, a wizard is started. One of the steps of the wizard allows you to set initialization parameter values for these, as well as other parameters. The character sets are specific to the territory of the database and language used by the database. The memory sizing option includes the distribution of physical memory into different components of the Oracle database. The block-sizing option configures the database block size to be used by the Oracle database that is being created. The connection mode configures which connection mode, shared server or dedicated server, will be used to connect to the server. DBCA can also be used to schedule database backups on a regular basis.
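A minimal sketch for checking these settings after the database has been created (SQL*Plus commands; the character set is recorded in NLS_DATABASE_PARAMETERS rather than in an initialization parameter):

-- Block size, memory sizing, and connection mode as set at creation time
SHOW PARAMETER db_block_size
SHOW PARAMETER sga_target
SHOW PARAMETER shared_servers

-- Database character set chosen during creation
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';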

You cannot configure the maximum number of users in the database when using DBCA to create a database. However, the maximum number of users in the database can be configured at the database level by using the initialization parameter LICENSE_MAX_USERS.

You cannot configure the default profile allocation for users when using DBCA to create a database. A profile is assigned at the database level when users are created; if no profile is specified, the DEFAULT profile is used.

7.1 Question

Your database server is running in shared server mode. Which component is a component of the Program Global Area (PGA) when the database is in shared server mode?

· shared pool

· stack space

· user session data

· memory structures

· cursor state information

7.2 Answer

stack space

7.3 Explanation

In shared server mode, only stack space is a component of the PGA. The other components, namely user session data, cursor state information, and the related memory structures, are kept in the System Global Area (SGA); the shared pool is always part of the SGA. When the database server is running in shared server mode, the PGA therefore contains only the stack space.

The shared pool is not a part of the PGA in shared server mode. It is a part of the SGA.

The user session data is not part of the PGA. It is a part of the SGA.

The memory structures are not a part of the PGA, but rather part of the SGA.

The cursor state information is stored in the SGA when the database is in shared server mode.

8.1 Question

You are using an spfile to start the database. The maximum number of users in your database has been set to 150. Because the number of users has already reached the maximum limit, you are not able to create more users.

Which statement should you issue to increase the maximum number of users in this database and keep the change persistent without affecting users who are connected to the database?

· ALTER SYSTEM SET LICENSE_MAX_USERS=200;

· ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200;

· ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=SPFILE;

· ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=MEMORY;

· ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200 SCOPE=SPFILE;

8.2 Answer

ALTER SYSTEM SET LICENSE_MAX_USERS=200;

8.3 Explanation

You should use the ALTER SYSTEM SET LICENSE_MAX_USERS=200; statement. When the instance is started with an spfile, the default scope of ALTER SYSTEM is SCOPE=BOTH, so the change takes effect immediately and persists across restarts without affecting users who are connected to the database. The maximum number of users in a database is set using the initialization parameter LICENSE_MAX_USERS.
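A minimal sketch of verifying the change afterwards (the value 200 comes from the question; output formatting will vary):

-- In-memory value of the parameter
SHOW PARAMETER license_max_users

-- Value recorded in the spfile
SELECT name, value FROM v$spparameter WHERE name = 'license_max_users';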

You should not use the ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200; statement because the parameter LICENSE_MAX_SESSIONS does not increase the number of users that can exist in a database. The LICENSE_MAX_SESSIONS parameter specifies the maximum number of user sessions that can be created in a database at a time.

You should not use the ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=SPFILE; statement because this will modify the parameter only in the spfile. The modification will come into effect only at the next startup.

You should not use the ALTER SYSTEM SET LICENSE_MAX_USERS=200 SCOPE=MEMORY; statement because this will modify the parameter only in memory. The change will not be persistent at the next startup.

You should not use the ALTER SYSTEM SET LICENSE_MAX_SESSIONS=200 SCOPE=SPFILE; statement because the parameter LICENSE_MAX_SESSIONS does not specify the number of users that can exist in a database. The LICENSE_MAX_SESSIONS parameter specifies the maximum number of user sessions that can be created in a database at a time.

9.1 Question

Your database is running in the ARCHIVELOG mode. After an instance crash, you start up the database by issuing the STARTUP command. Instance recovery is initiated by the SMON background process.

Which statements are true about the phases involved in the instance recovery performed by SMON? (Choose all that apply.)

· The instance recovery will recover data up to the last commit.

· The information used for instance recovery will be derived from the alert log.

· The information used for instance recovery will be derived from the control file.

· The uncommitted changes are rolled back using information in the undo segments.

· The information used for instance recovery will be derived from the data dictionary.

· The instance recovery will recover all the data entered into the database before the point of the crash.

· Only the committed changes stored in the online redo log file are applied to the affected data blocks.

· The committed and uncommitted changes stored in the online redo log are applied to the affected data blocks.

9.2 Answer

· The instance recovery will recover data up to the last commit.

· The information used for instance recovery will be derived from the control file.

· The uncommitted changes are rolled back using information in the undo segments.

· The committed and uncommitted changes stored in the online redo log are applied to the affected data blocks.

9.3 Explanation

The instance recovery performed by SMON goes through the following phases:

· All the committed and uncommitted data from the online redo log is applied to the affected blocks.

· The uncommitted data applied to the data blocks is rolled back using the undo information in the undo segments.

· Information that is used for instance recovery, such as the online redo log sequence number, is obtained from the control file.

The instance recovery thus performed will recover the database only up to the last commit. Any uncommitted data will not be applied to the database.

The information used for instance recovery is not derived from the alert log or the data dictionary. Instead, it is derived from the control file.

The instance recovery will not recover the uncommitted data. It will recover data only up to the last commit.

All the data from the redo log file, whether committed or uncommitted, will be applied to the database. The uncommitted data will then be rolled back by applying undo data from the undo segments.
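For context only, the estimated and target recovery times that relate to this roll-forward work can be inspected through V$INSTANCE_RECOVERY (a simple sketch; values depend on the instance):

-- Estimated and target mean time to recover, in seconds
SELECT estimated_mttr, target_mttr FROM v$instance_recovery;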

10.1 Question

During your routine monitoring operations on the database, you observe that there is heavy contention on data dictionary tables whenever extents are allocated or freed from a tablespace.

Which action should you take as a measure to eliminate this contention on the SYSTEM tablespace?

· Use local extent management for the SYSTEM tablespace.

· Use bigfile tablespaces in the database to avoid contention.

· Use automatic segment space management for tablespaces.

· Use local extent management for nonsystem tablespaces if not using it already.

10.2 Answer

Use local extent management for nonsystem tablespaces if not using it already.

10.3 Explanation

To reduce contention caused on the data dictionary tables by allocation or freeing of extents, you can use local extent management for nonsystem tablespaces. If you use locally-managed tablespaces, the contention on data dictionary tables is reduced. This is because locally-managed tablespaces do not record free space or extent allocation in the data dictionary.
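A hedged sketch of creating such a tablespace (the tablespace name, datafile path, and size are hypothetical):

-- Extent allocation is tracked in bitmaps inside the tablespace,
-- not in the data dictionary, which avoids the contention described above.
CREATE TABLESPACE app_data
DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;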

Using local extent management for the SYSTEM tablespace will not eliminate the contention because the contention is mainly caused by the allocation and deallocation of space in the nonsystem tablespaces.

Using bigfile tablespaces will not eliminate the contention on data dictionary tables. A bigfile tablespace is mainly used to store a large volume of data thereby avoiding the requirement to create several smallfile tablespaces to store the same data.

Using automatic segment space management will not eliminate the contention. Automatic segment space management controls how space within a segment is managed: the free and used space within the segment is tracked using bitmaps rather than free lists, which removes the need to set parameters such as PCTUSED and FREELISTS manually. It does not change how extent allocation is recorded, so it will not reduce contention on the data dictionary tables.

11.1 Question

Which stages of Structured Query Language (SQL) statement processing are performed when you issue a MERGE statement? (Choose all that apply.)

· The rows inserted are written to the datafiles.

· The data is fetched and returned to the users.

· The updated blocks are written to the datafiles.

· The changes made are recorded in the redo log files.

· The System Change Number is recorded in the control file.

· A parse tree is generated for the query if it does not already exist.

· Blocks of data are retrieved into the database buffer cache if they are not already present.

11.2 Answer

A parse tree is generated for the query if it does not already exist.

Blocks of data are retrieved into the database buffer cache if they are not already present.

11.3 Explanation

When a MERGE statement is issued at the SQL prompt, it undergoes the following stages:

· The PARSE stage, in which the statement is checked for correct syntax and for privileges on the referenced objects, and an optimal execution plan is determined.

· The BIND stage in which any variables are assigned values.

· The EXECUTE stage, in which the statement is actually executed. At this stage, the data blocks required by the statement are read into the database buffer cache, if they are not already present, and modified there.
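For reference, a minimal MERGE statement of the kind discussed here (the table and column names are hypothetical):

-- Hypothetical example: rows from employees_stage are merged into employees.
MERGE INTO employees e
USING employees_stage s
ON (e.employee_id = s.employee_id)
WHEN MATCHED THEN
UPDATE SET e.salary = s.salary
WHEN NOT MATCHED THEN
INSERT (employee_id, salary)
VALUES (s.employee_id, s.salary);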

The option that states that the inserted rows are written to the datafiles is incorrect because the data is written to datafiles only at checkpoints.

The option that states that the data is fetched and returned to the user is incorrect because the MERGE statement does not undergo a FETCH stage. The FETCH stage is only applicable to the SELECT statement.

The option that states that the updated blocks are written to the datafiles is incorrect because the updated data is written to datafiles only at checkpoints.

The option that states that the changes are recorded in the redo log files is incorrect because the changes are written to the redo log files only when a commit is performed.

The option that states that the System Change Number is recorded in the control file is incorrect because the System Change Number (SCN) is recorded in the control file only on a commit.

12.1 Question

You are a DBA with Startel. You have a set of four databases: SALESDB, FINDB, HRDB, and PROD. You also have two listeners, L1 and L2, and an application server, APPS1, located at different locations around the globe. You have been assigned the task of managing all these components from your office located in New York.

Which two tools should you install to administer these components centrally on the network? (Choose two.)

· Oracle Management Agent

· Oracle Enterprise Manager 10g

· Oracle Enterprise Manager 10g Grid Control Console

· Oracle Enterprise Manager 10g Application Server Control

· Oracle Enterprise Manager 10g Database Control Console

12.2 Answer

Oracle Management Agent

Oracle Enterprise Manager 10g Grid Control Console

12.3 Explanation

To administer the databases, listeners, and Web application servers centrally, you need the Oracle Management Agent installed on all the host computers and the Oracle Enterprise Manager 10g Grid Control Console installed on the centralized computer from which you will administer the globally distributed components. The Oracle Management Agent monitors all targets on its host, sends the monitoring information to the middle-tier Oracle Management Service, and manages and maintains the host and its targets. The Oracle Enterprise Manager 10g Grid Control Console provides the central console used to administer these components.

Oracle Enterprise Manager 10g does not provide any help in centrally administering databases located globally. Oracle Enterprise Manager 10g is used to monitor and manage your database.

Oracle Enterprise Manager 10g Application Server Control is not used for managing databases globally. It is used to manage Oracle Application Server 10g.

Oracle Enterprise Manager 10g Database Control Console is not used to manage databases globally. It can only be used to monitor and manage a single Oracle10g instance at a time.

13.1 Question

Users in your application complain of slow response to queries. Upon analysis, you realize that the queries are being reparsed because they are aged out quickly.

Which component of the Shared Global Area (SGA) should you modify to avoid the reparsing of queries?

· java pool

· large pool

· library cache

· redo log buffer

· data dictionary cache

· database buffer cache

13.2 Answer

library cache

13.3 Explanation

Increasing the library cache component of the SGA will help avoid the reparsing of queries because the library cache contains the parse tree information of queries. The queries are reparsed because they are aged out of the library cache too quickly due to lack of space in the library cache. Adding more space to the library cache will ensure that the parse tree information for the queries is not aged out. Because space cannot be directly allocated to the library cache, the shared pool component of the SGA can be increased, thereby increasing the size of the library cache, which is contained in the shared pool.
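A hedged sketch of the change and a way to observe the symptom (the 200M figure is purely illustrative, and SCOPE=BOTH assumes an spfile):

-- Illustrative only: enlarge the shared pool so library cache entries age out less often.
ALTER SYSTEM SET SHARED_POOL_SIZE = 200M SCOPE=BOTH;

-- RELOADS counts cursors that were aged out and had to be reparsed.
SELECT namespace, pins, reloads
FROM v$librarycache
WHERE namespace = 'SQL AREA';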

The java pool is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help to avoid the reparsing of queries. The java pool is a specific area in the SGA that is used to run Java-specific applications.

To run these Java applications, you use the Java stored procedures in the database. The size of the java pool is determined by the JAVA_POOL_SIZE parameter specified in the initialization parameter file.

The large pool is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help to avoid the reparsing of queries. A large pool, when configured, is used to store the session memory information for shared servers and is also used for some RMAN operations.

The redo log buffer is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help to avoid the reparsing of queries. The redo log buffer is used to record all the changes made to the data blocks in a database. The redo log buffer is used in recovery.

The data dictionary cache is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help avoid the reparsing of queries. The data dictionary cache is a part of the shared pool, which stores the definitions of recently accessed data objects, such as tables and indexes.

The database buffer cache is an incorrect option because it does not store the parse tree information for the queries, and modifying it will not help avoid the reparsing of queries. The database buffer cache is used to cache the recently accessed data blocks that have been retrieved from the datafiles as a result of user queries on the database.

14.1 Question

You need to create a database similar to an existing database. The data and structure of the new database must be the same as the existing database.

Which action would best obtain the desired results?

· Create a database using the CREATE DATABASE statement. Then, perform a complete database import of the original database.

· Create a template of the existing database using DBCA that incorporates only the structure of the original database. Then, use this template to create the new database.

· Create a template of the existing database using DBCA that incorporates the structure and data of the original database. Then, use this template to create the new database.

· Create a template of the existing database using DBCA that incorporates only the structure of the original database and use this template to create the new database. Then, perform a complete database import of the original database.

14.2 Answer

Create a template of the existing database using DBCA that incorporates the structure and data of the original database. Then, use this template to create the new database.

14.3 Explanation

To create a database similar to an existing database with the same structure and data, use DBCA to create a template with the same structure as well as data from the existing database. Then, use this template to create the new database. When you create a template from an existing database using the structure as well as data, this template will contain all the physical and logical characteristics of the source database. The datafiles, log files, control files, and archive redo logs will all be the same in number and size. The other structures in the database, such as the tablespaces, tables, and user schemas, will be identical to those in the source database.

Using the CREATE DATABASE statement would require more administrative effort. You would need to specify all of the needed options to create the new database with the same options as the original database. Manually creating the database and then performing an import would require more effort than using DBCA to create the database from a template.

Using DBCA to create a template with only the same structure and using this template to create the new database would create the database, but not with the same data as the original database. You would need to rebuild the data in the database.

Using DBCA to create a template with only the same structure and using this template to create the new database followed by a complete database import is not the best choice. This will increase the time needed to create an identical database because you will need to perform a complete database import after the database creation.

15.1 Question

Eric is working as a database administrator with Telstar Corporation. He has been granting SYSDBA and SYSOPER privileges to certain users who must perform certain administrative tasks, such as starting and shutting down the database remotely.

Which file determines the number of users that can be granted the SYSDBA or SYSOPER privilege?

· trace file

· control file

· alert log file

· password file

· system datafile

15.2 Answer

password file

15.3 Explanation

The password file contains information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege.

While creating a password file, you can use the ENTRIES clause to specify the number of users who can be granted the SYSDBA or SYSOPER privilege. The password file can also be used to authenticate users who are trying to access the database remotely. The REMOTE_LOGIN_PASSWORDFILE initialization parameter can be set to provide a different authentication mechanism for the database. Setting the parameter to a value of NONE specifies that users should be authenticated by the operating system. Setting the parameter to a value of SHARED specifies that more than one database can use the same password file. Using this method, the user can be authenticated on one or more databases that use the same password file.

Setting the parameter to a value of EXCLUSIVE specifies that the password file can be used by only one database and that users are authenticated through it.
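The password file itself is created with the orapwd operating system utility, and its ENTRIES argument fixes how many users can be granted SYSDBA or SYSOPER (for example, orapwd file=orapworcl password=secret entries=10, where the file name and values are purely illustrative). The users currently registered in the password file can then be listed with a query:

-- Users granted SYSDBA or SYSOPER through the password file
SELECT username, sysdba, sysoper FROM v$pwfile_users;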

The trace file option is incorrect because a trace file cannot be used to determine information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. The trace files in an Oracle database are created for every server process and background process. Whenever the server or background process encounters an error or is terminated, the information regarding the error is written to the respective trace file. The trace file can be used to gather information regarding these server and background processes.

The control file option is incorrect because a control file cannot be used to determine information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. A control file in an Oracle database contains information about the physical structure of the database. The control file is required whenever you start the database and must be available throughout the life of the instance.

The alert log file option is incorrect because the alert log file does not contain information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. The alert log file in an Oracle database contains database-wide information about the errors and other important events in the database. The alert log file contains the startup and shutdown times of the database, the default parameters used to start the database, and other important information regarding the database.

The system datafile option is incorrect because a system datafile file cannot be used to determine information regarding the number of users who can be granted the SYSDBA or SYSOPER privilege. The system datafile in an Oracle database is a part of the SYSTEM tablespace. The SYSTEM tablespace is created in an Oracle database automatically upon database creation and constitutes the data dictionary.

16.1 Question

You have created a database using DBCA. Now, you want to change the configurations of the database using DBCA. Which option CANNOT be configured using DBCA?

· configure the database to enable daily backup

· configure the database to run as a shared server

· configure the database to use the Local naming method

· configure the database to use password file authentication

· configure the database to use Automatic Storage Management

16.2 Answer

configure the database to use the Local naming method

16.3 Explanation

Using DBCA, you cannot configure the naming method for resolving a net service name. The naming method for the database can be configured using the net configuration files or the Net Configuration Assistant.

DBCA can be used to configure the database to enable daily backup. To enable daily backup of the database, you must check the Enable Daily Backup check box on the Management Options page. While enabling backup, you can specify the time when the daily backup will be performed and the username and password of the user performing the backup.

DBCA can be used to configure the database to run as a shared server. To configure the database to run as a shared server, you must select the appropriate connection mode in the Connection Mode tab on the Initialization Parameters page.

DBCA can be used to configure the database to use password file authentication by setting the appropriate parameters. You can set the REMOTE_LOGIN_PASSWORDFILE parameter to EXCLUSIVE or SHARED using the Initialization Parameters page to enable the database to use password file authentication.
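A minimal sketch of the parameter change involved; REMOTE_LOGIN_PASSWORDFILE is a static parameter, so it must be set in the spfile and takes effect at the next startup:

-- Enable password file authentication (requires an spfile and an instance restart).
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE SCOPE=SPFILE;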

DBCA can be used to configure the database to use Automatic Storage Management. To configure the database to use Automatic Storage Management, you must select the Automatic Storage Management (ASM) button on the Passwords and Storage page.

17.1 Question

Which activities in the database signal DBWn to write the data from the database buffer cache to datafiles? (Choose all that apply.)

· whenever there is a log switch

· whenever a user transaction is committed

· whenever the database is shut down cleanly

· whenever a table is dropped from the database

· whenever a tablespace in the database is taken offline

· whenever the server process cannot find free buffers in the database buffer cache

17.2 Answer

whenever there is a log switch

whenever the database is shut down cleanly

whenever a tablespace in the database is taken offline

whenever the server process cannot find free buffers in the database buffer cache

17.3 Explanation

DBWn writes data to datafiles whenever there is a log switch, whenever the database is shut down cleanly, whenever a tablespace in the database is taken offline, and whenever the server process cannot find free buffers in the database buffer cache. DBWn writes data to datafiles any time a checkpoint occurs. A checkpoint occurs when you shut down the database cleanly, whenever there is a log switch, and whenever a checkpoint is forced by the DBA using the ALTER SYSTEM CHECKPOINT statement. Also, an incremental checkpoint occurs when you take a tablespace in the database offline. In addition, when the server process scans the database buffer cache and is unable to find any free buffers, it signals DBWn to write the data from the database buffer cache to the datafiles.
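For reference, the two DBA-initiated events mentioned above can be triggered manually (shown only as a sketch, since both act on the whole instance):

-- Force a checkpoint: DBWn writes dirty buffers to the datafiles immediately.
ALTER SYSTEM CHECKPOINT;

-- Force a log switch, which also triggers a checkpoint.
ALTER SYSTEM SWITCH LOGFILE;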

DBWn does not write data to the datafiles when a user transaction is committed. On a commit, it is LGWR that writes the redo entries from the redo log buffer to the online redo log files; the modified data blocks remain in the database buffer cache until a checkpoint or a shortage of free buffers causes DBWn to write them.

DBWn does not write data to the datafiles when a table is dropped from the database. Dropping a table generates redo that is recorded in the redo log buffer, but the drop does not, by itself, signal DBWn.
