Friday, July 18, 2008
oracle backup concept
Thursday, July 17, 2008
tablespace hot backup consistency
After you performed your online backup, did you make sure you archived the
current online redo log and copied it across as well?
You'll need all the redo logs generated during the backup to ensure that the
database can resync itself. And that'll hence include the current online
redo as well.
redo undo and data consistency
=========================
When you modify a table with a normal insert/update/delete:
1) redo for corresponding UNDO block change for this update is placed in redolog buffer
2) corresponding UNDO block is changed
3) redo for DATA block change for this update is placed in redolog buffer
4) DATA block itself is changed
Note that during buffer cache block changes we record the RBA (redo byte address in redo stream) of last change in buffer headers (can be seen from X$BH). RBA is an address in redo stream, it can uniquely identify a redo record/change vector in redologs.
As long as we don't commit (with sync option) here, no redo has to be written onto disk.
1) If the instance crashes now, neither the redo nor the datablock changes persist, thus no inconsistency
2) If the instance crashes after LGWR has written the redo to disk, but DBWR hasn't written the datablocks, then during instance recovery we:
a) read old versions of datablocks from disk
b) apply redo to the old versions - note that Oracle guarantees every change vector between old version on disk and current version until commit to be in online redologs - e.g. it doesn't allow any redolog to be overwritten before all corresponding datablocks are checkpointed to disk.
c) this is called cache recovery - we make sure that all datafiles are physically consistent. After this point, if we still have inconsistent/fractured blocks, we have encountered a hardware fault or software bug
d) database can be opened now, if any server processes see uncommitted datablocks, they can roll these back on their own - leaving the blocks they don't need for SMON (this is called fast start rollback if memory serves correctly)
e) SMON goes through undo segments to see whether there are uncommitted transactions (the committed transactions have a commit record at their end in redologs) and rolls those back (or hands those over to parallel slaves to roll back depending on transaction sizes)
3) It is not possible to have a situation where DBWR writes modified buffers back to disk before their redo is written. This is called write-ahead logging protocol and is mentioned in documentation. It is achieved by marking RBA of latest change to buffer header (as mentioned above) and DBWR checks whether redo up to max(RBA) in write block batch is already on disk (from memory structure underlying X$KCCCP).
If it's not, DBWR will signal LGWR and won't write those buffers to disk until LGWR acknowledges that redo is on disk.
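The ordering check described in point 3 can be sketched as a toy model. The class and field names below are invented for illustration; this is not Oracle's actual implementation, just the write-ahead rule in miniature:

```python
# Toy model of the write-ahead logging rule: DBWR may not write a dirty
# buffer to the datafile until all redo up to that buffer's last-change
# RBA has been flushed to disk by LGWR.

class ToyInstance:
    def __init__(self):
        self.redo_buffer = []    # redo records not yet on disk
        self.redo_on_disk = []   # redo records persisted by "LGWR"
        self.next_rba = 0        # monotonically increasing redo byte address
        self.dirty = {}          # block_id -> RBA of the block's last change

    def change_block(self, block_id, payload):
        """Generate redo first, then mark the buffer dirty with that RBA."""
        rba = self.next_rba
        self.next_rba += 1
        self.redo_buffer.append((rba, block_id, payload))
        self.dirty[block_id] = rba   # like the RBA kept in the buffer header
        return rba

    def lgwr_flush(self):
        self.redo_on_disk.extend(self.redo_buffer)
        self.redo_buffer = []

    def dbwr_can_write(self, block_id):
        """DBWR checks that redo up to the buffer's RBA is already on disk."""
        rba = self.dirty[block_id]
        flushed = {r for (r, _, _) in self.redo_on_disk}
        return rba in flushed

inst = ToyInstance()
inst.change_block("data_block_1", "update row")
print(inst.dbwr_can_write("data_block_1"))   # False: redo only in memory
inst.lgwr_flush()                            # DBWR would signal LGWR here
print(inst.dbwr_can_write("data_block_1"))   # True: redo persisted, write ok
```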
So, as long as the foundations on which Oracle relies (OS, hardware) work properly and we don't hit any bugs (and unless I'm wrong), Oracle ALWAYS guarantees the blocks to be consistent after cache recovery in case of instance failure. It doesn't matter whether only half of DBWR's block write succeeded, or 99% or 1%, Oracle's recovery mechanisms are conceptually reliable.
All of the above is a simplification, especially from 10g onwards, where we can have some redo log buffers in the shared pool (private strands), some undo data cached in the shared pool (in-memory undo), and out-of-order physical redo (multiple redolog strands and redo allocation latches). But again, I'm sure Oracle has based its recovery mechanisms on solid theoretical foundations here - no vulnerabilities to race conditions, and no dependency on the order you execute your stuff.
Tanel.
Sunday, July 13, 2008
get oracle DDL
select DBMS_METADATA.GET_DDL('TABLESPACE','SYSTEM') from dual;
oracle automatic startup and shutdown
install Oracle 11g on RHEL4 and RHEL5
oracle temporary tablespace
create oracle database
CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY sys_password
USER SYSTEM IDENTIFIED BY system_password
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
GROUP 2 ('/u01/app/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
GROUP 3 ('/u01/app/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/app/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
SYSAUX DATAFILE '/u01/app/oracle/oradata/mynewdb/sysaux01.dbf' SIZE 325M REUSE
DEFAULT TABLESPACE users
DATAFILE '/u01/app/oracle/oradata/mynewdb/users01.dbf'
SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/app/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/u01/app/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
Step 10: Create Additional Tablespaces
To make the database functional, you need to create additional tablespaces for your application data. The following sample script creates some additional tablespaces:
CREATE TABLESPACE apps_tbs LOGGING
DATAFILE '/u01/app/oracle/oradata/mynewdb/apps01.dbf'
SIZE 500M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace (optional)
CREATE TABLESPACE indx_tbs LOGGING
DATAFILE '/u01/app/oracle/oradata/mynewdb/indx01.dbf'
SIZE 100M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
For information about creating tablespaces, see Chapter 12, "Managing Tablespaces".
Step 11: Run Scripts to Build Data Dictionary Views
Run the scripts necessary to build data dictionary views, synonyms, and PL/SQL packages, and to support proper functioning of SQL*Plus:
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/sqlplus/admin/pupbld.sql
EXIT
The at-sign (@) is shorthand for the command that runs a SQL*Plus script. The question mark (?) is a SQL*Plus variable indicating the Oracle home directory. The following table contains descriptions of the scripts:
Script Description
CATALOG.SQL Creates the views of the data dictionary tables, the dynamic performance views, and public synonyms for many of the views. Grants PUBLIC access to the synonyms.
CATPROC.SQL Runs all scripts required for or used with PL/SQL.
PUPBLD.SQL Required for SQL*Plus. Enables SQL*Plus to disable commands by user.
Oracle tempfile creation if the current one is full
CREATE TEMPORARY TABLESPACE "TEMP1" TEMPFILE
'E:\ORACLE_DB_MAIN\ORADATA\MAIN\TEMP02.DBF' SIZE 200M
REUSE AUTOEXTEND
ON NEXT 640K MAXSIZE 32767M EXTENT MANAGEMENT LOCAL UNIFORM
SIZE 1024K;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP1";
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;
oracle backup and recovery
===================
Opening or Bringing the database in Archivelog mode.
To open the database in archivelog mode, follow these steps:
STEP 1: Shutdown the database if it is running.
STEP 2: Take a full offline backup.
STEP 3: Set the following parameters in the parameter file.
LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"
If you want, you can specify a second destination as well:
LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"
STEP 4: Start and mount the database.
SQL> STARTUP MOUNT
STEP 5: Give the following command:
SQL> ALTER DATABASE ARCHIVELOG;
STEP 6: Then type the following to confirm:
SQL> ARCHIVE LOG LIST;
STEP 7: Now open the database:
SQL> alter database open;
STEP 8: It is recommended that you take a full backup after you have brought the database into archivelog mode.
To bring the database back into NOARCHIVELOG mode, follow these steps:
STEP 1: Shutdown the database if it is running.
STEP 2: Comment out the following parameters in the parameter file by putting "#" in front of them:
# LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"
# LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"
# LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
STEP 3: Startup and mount the database.
SQL> STARTUP MOUNT;
STEP 4: Give the following Commands
SQL> ALTER DATABASE NOARCHIVELOG;
STEP 5: Shutdown the database and take full offline backup.
TAKING OFFLINE BACKUPS (UNIX)
Shutdown the database if it is running. Then start SQL*Plus and connect as SYSDBA.
$sqlplus
SQL> connect / as sysdba
SQL> Shutdown immediate
SQL> Exit
After shutting down the database, copy all the datafiles, logfiles, controlfiles, the parameter file and the password file to your backup destination.
TIP:
To identify the datafiles and logfiles, query the views V$DATAFILE and V$LOGFILE before shutting down.
Let's suppose all the files are in the "/u01/ica" directory. Then the following commands copy all the files to the backup destination /u02/backup.
$cd /u01/ica
$cp * /u02/backup/
Be sure to remember the original location of each file. This will be useful when restoring from this backup. You can create a text file recording the location of each file for future use. Now you can open the database.
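The copy-and-record step can be sketched like this. It is a minimal sketch on throwaway files in a temp directory; the file names are made up, and in practice the list would come from querying V$DATAFILE and V$LOGFILE before shutdown:

```python
# Sketch of the offline-backup copy step: copy every file in a list to a
# backup directory and record where each one came from, so a later restore
# knows the original locations. Demonstrated on fake files, not real
# Oracle datafiles.
import shutil
import tempfile
from pathlib import Path

work = Path(tempfile.mkdtemp())
backup_dir = work / "backup"
backup_dir.mkdir()

# stand-ins for the datafiles/logfiles/controlfiles to back up
files_to_backup = [work / "usr1.dbf", work / "redo01.log"]
for f in files_to_backup:
    f.write_text("fake file contents")

# the copy step itself, plus a manifest of original locations
manifest = []
for f in files_to_backup:
    shutil.copy2(f, backup_dir / f.name)
    manifest.append(f"{f.name} came from {f.parent}")

(backup_dir / "locations.txt").write_text("\n".join(manifest))
print(sorted(p.name for p in backup_dir.iterdir()))
```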
TAKING ONLINE (HOT) BACKUPS (UNIX)
To take online backups the database should be running in archivelog mode. To check whether the database is running in archivelog or noarchivelog mode, start sqlplus and connect as SYSDBA, then give the command "archive log list"; this will show you the archiving status.
$sqlplus
Enter User:/ as sysdba
SQL> ARCHIVE LOG LIST
If the database is running in archive log mode then you can take online backups.
Let us suppose we want to take an online backup of the "USERS" tablespace. You can query the V$DATAFILE view to find out the names of the datafiles associated with this tablespace. Let's suppose the file is "/u01/ica/usr1.dbf".
Give the following series of commands to take online backup of USERS tablespace.
$sqlplus
Enter User:/ as sysdba
SQL> alter tablespace users begin backup;
SQL> host cp /u01/ica/usr1.dbf /u02/backup
SQL> alter tablespace users end backup;
SQL> exit;
RECOVERING THE DATABASE IF IT IS RUNNING IN NOARCHIVELOG MODE.
Option 1: When you don't have a backup.
If you have lost one datafile, you don't have any backup, and the datafile does not contain important objects, then you can drop the damaged datafile and open the database. You will lose all information contained in the damaged datafile.
The following are the steps to drop a damaged datafile and open the database.
(UNIX)
STEP 1: First take full backup of database for safety.
STEP 2: Start the sqlplus and give the following commands.
$sqlplus
Enter User:/ as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;
SQL>alter database open;
Option 2: When you have a backup.
If the database is running in noarchivelog mode and you have a full backup, there are two options:
i. Either drop the damaged datafile, if it does not contain important information which you can afford to lose.
ii. Or restore from the full backup. You will lose all the changes made to the database since the last full backup.
To drop the damaged datafile follow the steps shown previously.
To restore from the full database backup, do the following.
STEP 1: Take a full backup of current database.
STEP 2: Restore from full database backup i.e. copy all the files from backup to their original locations.
(UNIX)
Suppose the backup is in the "/u02/backup" directory. Then do the following.
$cp /u02/backup/* /u01/ica
This will copy all the files from backup directory to original destination. Also remember to copy the control files to all the mirrored locations.
RECOVERING FROM LOSS OF A CONTROL FILE.
If you have lost the control file and it is mirrored, simply copy the control file from a mirrored location to the damaged location and open the database.
If you have lost all the mirrored control files but all the datafiles and logfiles are intact, then you can recreate the control file.
If you have already taken a backup of the control file creation statement (by giving the command "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;") and you have not added any tablespace since then, just create the control file by executing that statement.
But if you have added any new tablespace after generating the create controlfile statement, then you have to edit the script and include the filename and size of the new file.
If your script file containing the control file creation statement is "CR.SQL", then just do the following.
STEP 1: Start sqlplus
STEP 2: connect / as sysdba
STEP 3: Start the instance without mounting the database, like this:
SQL> STARTUP NOMOUNT
STEP 4: Run the "CR.SQL" script file.
STEP 5: Mount and Open the database.
SQL>alter database mount;
SQL>alter database open;
If you do not have a backup of the control file creation statement, then you have to give the CREATE CONTROLFILE statement manually, writing out the file names and sizes of all the datafiles. You will lose any datafiles which you do not include.
Refer to "Managing Control File" topic for the CREATE CONTROL FILE statement.
Recovering the Database when it is running in ARCHIVELOG Mode.
Recovering from the loss of a damaged datafile.
If you have lost one datafile, follow the steps shown below.
STEP 1. Shutdown the Database if it is running.
STEP 2. Restore the datafile from most recent backup.
STEP 3. Then Start sqlplus and connect as SYSDBA.
$sqlplus
Enter User:/ as sysdba
SQL>Startup mount;
SQL>Set autorecovery on;
SQL>alter database recover;
If all archive log files are available, recovery should go smoothly. After you get the "Media recovery complete." message, go on to the next step.
STEP 4. Now open the database
SQL>alter database open;
Recovering from Lost Archived Files:
If you have lost the archived files, immediately shut down the database and take a full offline backup.
Time Based Recovery (INCOMPLETE RECOVERY).
Suppose a user has dropped a crucial table accidentally and you have to recover it.
You took a full backup of the database on Monday 13-Aug-2007; the table was created on Tuesday 14-Aug-2007 and thousands of rows were inserted into it. A user accidentally dropped the table on Thursday 16-Aug-2007 and nobody noticed until Saturday.
Now to recover the table follow these steps.
STEP 1. Shutdown the database and take a full offline backup.
STEP 2. Restore all the datafiles, logfiles and control file from the full offline backup which was taken on Monday.
STEP 3. Start SQLPLUS and start and mount the database.
STEP 4. Then give the following command to recover database until specified time.
SQL> recover database until time '2007-08-16:13:55:00'
using backup controlfile;
STEP 5. Open the database and reset the logs, because you have performed an incomplete recovery, like this:
SQL> alter database open resetlogs;
STEP 6. After database is open. Export the table to a dump file using Export Utility.
STEP 7. Restore from the full database backup which you have taken on Saturday.
STEP 8. Open the database and Import the table.
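The effect of UNTIL TIME in the steps above can be sketched as a toy replay. The record format, table name and timestamps below are invented for illustration; real recovery replays physical redo, not logical operations:

```python
# Toy sketch of time-based (incomplete) recovery: restore an old backup,
# then replay archived changes in order, stopping at a cutoff timestamp
# just before the accidental DROP.

backup = {}  # state as of Monday's full backup (table did not exist yet)

# archived "redo": (timestamp, operation, table, value)
redo_stream = [
    ("2007-08-14 09:00:00", "CREATE", "important_tab", None),
    ("2007-08-14 10:00:00", "INSERT", "important_tab", "row1"),
    ("2007-08-15 11:00:00", "INSERT", "important_tab", "row2"),
    ("2007-08-16 14:00:00", "DROP",   "important_tab", None),  # the accident
]

def recover_until(state, stream, cutoff):
    """Apply changes in order, stopping at the first record past the cutoff."""
    state = dict(state)
    for ts, op, table, value in stream:
        if ts > cutoff:   # string comparison works for this timestamp format
            break
        if op == "CREATE":
            state[table] = []
        elif op == "INSERT":
            state[table].append(value)
        elif op == "DROP":
            del state[table]
    return state

# recover to just before the Thursday drop
recovered = recover_until(backup, redo_stream, "2007-08-16 13:55:00")
print(recovered)   # {'important_tab': ['row1', 'row2']}
```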
Friday, July 4, 2008
how to change sysman and dbsnmp password for Oracle 11g under Linux
- emctl stop dbconsole
- emctl status dbconsole and emctl status agent to make sure it's stopped
- ORACLE_SID=orcl
- sqlplus / as sysdba
- alter user sysman identified by sysman (changes the password to 'sysman')
- conn sysman/sysman
- alter user sysman account unlock (if above command shows 'locked')
- cd $ORACLE_HOME/hostname_sid/sysman/config
- vi emoms.properties
- change oracle.sysman.eml.mntr.emdRepPwd=newplaintextpassword
- change oracle.sysman.eml.mntr.emdRepPwdEncrypted=FALSE (from TRUE to FALSE)
- emctl start dbconsole
- now the plain text password will be encrypted, and false will become true
For the dbsnmp password change, do the same thing, except modify the file targets.xml under the sysman/emd folder instead. Just like before, change the encrypted password value to the new plaintext password and change TRUE to FALSE.
Wednesday, July 2, 2008
manually create EM db console
You have specified the following settings
Database ORACLE_HOME ................ /u01/app/oracle/product/11.1.0/db_2
Local hostname ................ hpdb1.domain.com
Listener port number ................ 1521
Database SID ................ testdb
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............
-----------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
INFO: >>>>>>>>>>> The Database Control URL is https://hpdb1.domain.com:5500/em <<<<<<<<<<<
Jul 3, 2008 11:21:57 AM oracle.sysman.emcp.EMDBPostConfig invoke
WARNING:
************************ WARNING ************************
Management Repository has been placed in secure mode wherein Enterprise Manager data will be encrypted. The encryption key has been placed in the file: /u01/app/oracle/product/11.1.0/db_2/hpdb1.domain.com_testdb/sysman/config/emkey.ora. Please ensure this file is backed up as the encrypted data will become unusable if this file is lost.
***********************************************************
Enterprise Manager configuration completed successfully
Sunday, June 29, 2008
oracle listener
- lsnrctl stop/start/status
- to disable log, run 'lsnrctl', then 'set log_status off', then 'save_config'
- by default, trc_level is off
how to start EM for the database that was created using dbca
ORACLE_SID=test
emctl stop dbconsole
emctl start dbconsole
Oracle fast commit
When a transaction commits, Oracle could in principle either:
- write all the database blocks the transaction changed to their respective datafiles, or
- write only the redo information, which typically involves much less I/O than writing the database blocks. This record of the changes can be replayed to reproduce all the transaction's changes later, if they are needed due to a failure.
Oracle takes the second approach: when a user commits a transaction, Oracle guarantees that the redo for those changes is written to the redo logs on disk. The actual changed database blocks are written out to the datafiles later.
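This trade-off can be sketched as a toy model. The block and log structures below are invented for illustration, not Oracle's actual formats:

```python
# Toy illustration of fast commit: at COMMIT only the redo is forced to
# "disk"; the changed data block is written later. After a crash, replaying
# the durable redo over the stale on-disk block reproduces the committed
# change.

datafile = {"emp_block": {"sal": 1000}}   # on-disk block, old version
redo_log = []                             # durable redo (on "disk")

# the transaction changes the block in cache and generates redo
cache = {"emp_block": {"sal": 2000}}
pending_redo = [("emp_block", "sal", 2000)]

# COMMIT: only the redo goes to disk; the datafile is untouched
redo_log.extend(pending_redo)

# crash before DBWR ever writes the cache: cache is lost, datafile is stale
cache = None
print(datafile["emp_block"]["sal"])   # 1000 - stale on-disk value

# instance recovery: replay durable redo over the on-disk blocks
for block, col, val in redo_log:
    datafile[block][col] = val
print(datafile["emp_block"]["sal"])   # 2000 - committed change reproduced
```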
Oracle System Change Number
oracle listener and dispatcher and shared servers
- the client contacts the Listener over the network
- the Listener detects an incoming request and, based on the Oracle Net configuration, determines that it's for a multithreaded server. Instead of handing the client off to a dedicated server, the Listener hands the client off to a dispatcher for the network protocol the client is using.
- The Listener introduces the client and the dispatcher by letting each know the other's network address.
- Once the client and the dispatcher know where to find each other, they communicate directly. The Listener is no longer required. The client sends each work request directly to the dispatcher.
- The dispatcher places the client's request in the request queue in the SGA
- The next available shared server process reads the request from the request queue and does the work.
- The shared server places the results for the client's request in the response queue for the dispatcher that originally submitted the request.
- The dispatcher reads the results from its queue.
- The dispatcher sends the results to the client.
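The queue flow above can be sketched as a toy model. The dispatcher names follow Oracle's D000-style naming, but the data structures and function names are invented for illustration:

```python
# Toy model of the shared-server flow: dispatchers put client requests on a
# common request queue in the "SGA"; any shared server picks one up, does
# the work, and posts the result on the response queue of the dispatcher
# that submitted it.
from collections import deque

request_queue = deque()                 # one request queue in the SGA
response_queues = {"D000": deque(),     # one response queue per dispatcher
                   "D001": deque()}

def dispatcher_submit(dispatcher, client, work):
    # the request is tagged with the dispatcher's name so the shared
    # server knows which response queue to post the result on
    request_queue.append((dispatcher, client, work))

def shared_server_run_once():
    dispatcher, client, work = request_queue.popleft()
    result = work.upper()               # stand-in for doing the real SQL work
    response_queues[dispatcher].append((client, result))

dispatcher_submit("D001", "client_7", "select ename from emp")
shared_server_run_once()
client, result = response_queues["D001"].popleft()
print(client, result)   # the dispatcher would now send this back to client_7
```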
Saturday, June 28, 2008
oracle data dictionary
shared server: v$queue, v$dispatcher, v$shared_server
tablespaces: dba_tablespaces, dba_data_files, dba_free_space
control files: v$controlfile, v$parameter, v$controlfile_record_section
datafiles: v$datafile, v$datafile_header, v$filestat, dba_data_files
segment : dba_segments
extents: dba_extents
redo threads, groups and numbers: v$thread, v$log, v$logfile
archiving status: v$database, v$log, v$archived_log, v$archive_dest
database instance: v$instance, v$parameter, v$system_parameter
user passwords: v$pwfile_users
processes: v$process, v$bgprocess,v$session,v$lock
rman recovery: v$recover_file
work area memory: v$pgastat, v$sysstat,v$sesstat.
Wednesday, June 25, 2008
EXEC DBMS_UTILITY.compile_schema(schema => 'JEPHE');
EXEC DBMS_UTILITY.compile_schema(schema => 'JEPHE');
Run the above command a few times; if the following query returns the same value before and after running it, you can stop recompiling:
select count(*) from dba_objects where status = 'INVALID';
generate view, procedure, function, package and trigger compilation script
select 'alter ' || object_type || ' ' || owner || '.' || object_name || ' compile' || ';' from dba_objects where object_type in ('VIEW', 'SYNONYM', 'PROCEDURE', 'FUNCTION', 'PACKAGE', 'TRIGGER') and owner in ('JEPHE');
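What the query above generates can be sketched as follows. The sample rows are made up; in the database they would come from DBA_OBJECTS:

```python
# Sketch of the statement generation the SQL above performs: build
# "alter ... compile;" statements from (object_type, owner, object_name)
# rows such as a query on DBA_OBJECTS would return.

rows = [
    ("VIEW", "JEPHE", "EMP_V"),
    ("PROCEDURE", "JEPHE", "PAY_RAISE"),
    ("PACKAGE", "JEPHE", "HR_PKG"),
]

stmts = ["alter {} {}.{} compile;".format(t, o, n) for (t, o, n) in rows]
for s in stmts:
    print(s)
# alter VIEW JEPHE.EMP_V compile;
# alter PROCEDURE JEPHE.PAY_RAISE compile;
# alter PACKAGE JEPHE.HR_PKG compile;
```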
db2 8.1 describe table
solution 1:
use db2inst1, then 'describe table schema.table', e.g. describe table jephe.user_account.
solution 2:
grant select on table syscat.columns to user 'username' e.g. grant select on table syscat.columns to user jephe
Tuesday, June 24, 2008
Recompiling Invalid Schema Objects
========================
Operations such as upgrades, patches and DDL changes can invalidate schema objects. Provided these changes don't cause compilation failures the objects will be revalidated by on-demand automatic recompilation, but this can take an unacceptable time to complete, especially where complex dependencies are present. For this reason it makes sense to recompile invalid objects in advance of user calls. It also allows you to identify if any changes have broken your code base. This article presents several methods for recompiling invalid schema objects.
- Identifying Invalid Objects
- The Manual Approach
- Custom Script
- DBMS_UTILITY.compile_schema
- UTL_RECOMP
- utlrp.sql and utlprp.sql
Identifying Invalid Objects
The DBA_OBJECTS view can be used to identify invalid objects using the following query:

COLUMN object_name FORMAT A30

SELECT owner,
       object_type,
       object_name,
       status
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER BY owner, object_type, object_name;

With this information you can decide which of the following recompilation methods is suitable for you.
The Manual Approach
For small numbers of objects you may decide that a manual recompilation is sufficient. The following example shows the compile syntax for several object types:

ALTER PACKAGE my_package COMPILE;
ALTER PACKAGE my_package COMPILE BODY;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER VIEW my_view COMPILE;

Notice that the package body is compiled in the same way as the package specification, with the addition of the word "BODY" at the end of the command.
An alternative approach is to use the DBMS_DDL package to perform the recompilations:

EXEC DBMS_DDL.alter_compile('PACKAGE', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PACKAGE BODY', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PROCEDURE', 'MY_SCHEMA', 'MY_PROCEDURE');
EXEC DBMS_DDL.alter_compile('FUNCTION', 'MY_SCHEMA', 'MY_FUNCTION');
EXEC DBMS_DDL.alter_compile('TRIGGER', 'MY_SCHEMA', 'MY_TRIGGER');

This method is limited to PL/SQL objects, so it is not applicable for views.
Custom Script
In some situations you may have to compile many invalid objects in one go. One approach is to write a custom script to identify and compile the invalid objects. The following example identifies and recompiles invalid packages and package bodies. This approach is fine if you have a specific task in mind, but be aware that you may end up compiling some objects multiple times depending on the order they are compiled in. It is probably a better idea to use one of the methods provided by Oracle, since they take the code dependencies into account.

SET SERVEROUTPUT ON SIZE 1000000
BEGIN
FOR cur_rec IN (SELECT owner,
object_name,
object_type,
DECODE(object_type, 'PACKAGE', 1,
'PACKAGE BODY', 2, 2) AS recompile_order
FROM dba_objects
WHERE object_type IN ('PACKAGE', 'PACKAGE BODY')
AND status != 'VALID'
ORDER BY 4)
LOOP
BEGIN
IF cur_rec.object_type = 'PACKAGE' THEN
EXECUTE IMMEDIATE 'ALTER ' || cur_rec.object_type ||
' "' || cur_rec.owner || '"."' || cur_rec.object_name || '" COMPILE';
ELSE
EXECUTE IMMEDIATE 'ALTER PACKAGE "' || cur_rec.owner ||
'"."' || cur_rec.object_name || '" COMPILE BODY';
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.put_line(cur_rec.object_type || ' : ' || cur_rec.owner ||
' : ' || cur_rec.object_name);
END;
END LOOP;
END;
/
DBMS_UTILITY.compile_schema
The COMPILE_SCHEMA procedure in the DBMS_UTILITY package compiles all procedures, functions, packages, and triggers in the specified schema. The example below shows how it is called from SQL*Plus:

EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
UTL_RECOMP
The UTL_RECOMP package contains two procedures used to recompile invalid objects. As the names suggest, the RECOMP_SERIAL procedure recompiles all the invalid objects one at a time, while the RECOMP_PARALLEL procedure performs the same task in parallel using the specified number of threads. Their definitions are listed below:

PROCEDURE RECOMP_SERIAL(
  schema   IN   VARCHAR2    DEFAULT NULL,
  flags    IN   PLS_INTEGER DEFAULT 0);

PROCEDURE RECOMP_PARALLEL(
  threads  IN   PLS_INTEGER DEFAULT NULL,
  schema   IN   VARCHAR2    DEFAULT NULL,
  flags    IN   PLS_INTEGER DEFAULT 0);

The usage notes for the parameters are listed below:
- schema - The schema whose invalid objects are to be recompiled. If NULL all invalid objects in the database are recompiled.
- threads - The number of threads used in a parallel operation. If NULL the value of the "job_queue_processes" parameter is used. Matching the number of available CPUs is generally a good starting point for this value.
- flags - Used for internal diagnostics and testing only.
The following examples show how the package can be used:

-- Schema level.
EXEC UTL_RECOMP.recomp_serial('SCOTT');
EXEC UTL_RECOMP.recomp_parallel(4, 'SCOTT');

-- Database level.
EXEC UTL_RECOMP.recomp_serial();
EXEC UTL_RECOMP.recomp_parallel(4);

-- Using the job_queue_processes value.
EXEC UTL_RECOMP.recomp_parallel();
EXEC UTL_RECOMP.recomp_parallel(NULL, 'SCOTT');

There are a number of restrictions associated with the use of this package, including:
- Parallel execution is performed using the job queue. All existing jobs are marked as disabled until the operation is complete.
- The package must be run from SQL*Plus as the SYS user, or another user with SYSDBA privilege.
- The package expects the STANDARD, DBMS_STANDARD, DBMS_JOB and DBMS_RANDOM packages to be present and valid.
- Running DDL operations at the same time as this package may result in deadlocks.
utlrp.sql and utlprp.sql
The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper on the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql script accepts a single integer parameter that indicates the level of parallelism, as follows:
- 0 - The level of parallelism is derived based on the CPU_COUNT parameter.
- 1 - The recompilation is run serially, one object at a time.
- N - The recompilation is run in parallel with "N" number of threads.
Hope this helps. Regards Tim...
oracle imp/exp
These tools are used to transfer data from one Oracle database to another. You use the Export tool to export data from the source database, and the Import tool to load data into the target database. When you export tables, the Export tool extracts the tables and puts them into the dump file. This dump file is transferred to the target database, where the Import tool copies the data from the dump file into the target database.
From version 10g onwards, Oracle recommends using the Data Pump Export and Import tools, which are enhanced versions of the original Export and Import tools.
The export dump file contains objects in the following order:
- Type definitions
- Table definitions
- Table data
- Table indexes
- Integrity constraints, views, procedures, and triggers
- Bitmap, function-based, and domain indexes
When you import the tables, the Import tool performs the actions in the following order: new tables are created, data is imported and indexes are built, triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, function-based, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents redundant triggers from firing twice on the same data.
You can run the Export and Import tools in two modes:
Command Line Mode
Interactive Mode
When you just type exp or imp at the OS prompt, the tool runs in interactive mode, i.e. it prompts you for all the necessary input. If you supply command line arguments when calling exp or imp, it runs in command line mode.
Command Line Parameters of Export tool
You can control how Export runs by entering the EXP command followed
by various arguments. To specify parameters, you use keywords:
Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
Keyword Description (Default)
--------------------------------------------------------------
USERID username/password
BUFFER size of data buffer
FILE output files (EXPDAT.DMP)
COMPRESS import into one extent (Y)
GRANTS export grants (Y)
INDEXES export indexes (Y)
DIRECT direct path (N)
LOG log file of screen output
ROWS export data rows (Y)
CONSISTENT cross-table consistency(N)
FULL export entire file (N)
OWNER list of owner usernames
TABLES list of table names
RECORDLENGTH length of IO record
INCTYPE incremental export type
RECORD track incr. export (Y)
TRIGGERS export triggers (Y)
STATISTICS analyze objects (ESTIMATE)
PARFILE parameter filename
CONSTRAINTS export constraints (Y)
OBJECT_CONSISTENT transaction set to read only during object export (N)
FEEDBACK display progress every x rows (0)
FILESIZE maximum size of each dump file
FLASHBACK_SCN SCN used to set session snapshot back to
FLASHBACK_TIME time used to get the SCN closest to the specified time
QUERY select clause used to export a subset of a table
RESUMABLE suspend when a space related error is encountered(N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
TTS_FULL_CHECK perform full or partial dependency check for TTS
TABLESPACES list of tablespaces to export
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TEMPLATE template name which invokes iAS mode export
The Export and Import tools support four modes of operation
FULL :Exports all the objects in all schemas
OWNER :Exports objects only belonging to the given OWNER
TABLES :Exports Individual Tables
TABLESPACE :Export all objects located in a given TABLESPACE.
Example of Exporting Full Database
The following example shows how to export full database
$exp USERID=scott/tiger FULL=y FILE=myfull.dmp
In the above command, the FILE option specifies the name of the dump file, the FULL option specifies that you want to export the full database, and the USERID option specifies the user account used to connect to the database. Note that to perform a full export the user should have the DBA or EXP_FULL_DATABASE privilege.
To export objects stored in particular schemas, you can run the export utility with the following arguments:
$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp
The above command will export all the objects stored in the SCOTT and ALI schemas.
To export individual tables give the following command
$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp
This will export scott's emp and sales tables.
Exporting Consistent Image of the tables
If you include the CONSISTENT=Y option in the export command, the Export utility will export a consistent image of the table, i.e. changes made to the table during the export operation will not be exported.
Objects exported by the Export utility can only be imported by the Import utility. The Import utility can run in interactive mode or command line mode.
You can let Import prompt you for parameters by entering the IMP command followed by your username/password:
Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:
Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
Keyword | Description (Default) |
USERID | username/password |
BUFFER | size of data buffer |
FILE | input files (EXPDAT.DMP) |
SHOW | just list file contents (N) |
IGNORE | ignore create errors (N) |
GRANTS | import grants (Y) |
INDEXES | import indexes (Y) |
ROWS | import data rows (Y) |
LOG | log file of screen output |
FULL | import entire file (N) |
FROMUSER | list of owner usernames |
TOUSER | list of usernames |
TABLES | list of table names |
RECORDLENGTH | length of IO record |
INCTYPE | incremental import type |
COMMIT | commit array insert (N) |
PARFILE | parameter filename |
CONSTRAINTS | import constraints (Y) |
DESTROY | overwrite tablespace data file (N) |
INDEXFILE | write table/index info to specified file |
SKIP_UNUSABLE_INDEXES | skip maintenance of unusable indexes (N) |
FEEDBACK | display progress every x rows(0) |
TOID_NOVALIDATE | skip validation of specified type ids |
FILESIZE | maximum size of each dump file |
STATISTICS | import precomputed statistics (always) |
RESUMABLE | suspend when a space related error is encountered(N) |
RESUMABLE_NAME | text string used to identify resumable statement |
RESUMABLE_TIMEOUT | wait time for RESUMABLE |
COMPILE | compile procedures, packages, and functions (Y) |
STREAMS_CONFIGURATION | import streams general metadata (Y) |
STREAMS_INSTANTIATION | import streams instantiation metadata (N) |
Example Importing Individual Tables
To import individual tables from a full database export dump file give the following command
$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)
This command will import only the emp and dept tables into the scott schema, and you will get output similar to that shown below
Export file created by EXPORT:V10.00.00 via conventional path
import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table "DEPT" 4 rows imported
. . importing table "EMP" 14 rows imported
Import terminated successfully without warnings.
Example Importing Tables of One User Account into Another User Account
For example, suppose Ali has exported tables into a dump file mytables.dmp. Now Scott wants to import these tables. To achieve this Scott will give the following import command
$imp scott/tiger FILE=mytables.dmp FROMUSER=ali TOUSER=scott
The import utility will then warn that the tables in the dump file were exported by user Ali, not you, and then proceed.
Example Importing Tables Using Pattern Matching
Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the "%" wildcard in the TABLES option. For example, the following command will import all tables whose names start with the letter "a" and all tables whose names contain the letter "d"
$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(a%,%d%)
Migrating a Database across platforms.
The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms.
The following steps present a general overview of how to move a database between platforms.
- As a DBA user, issue the following SQL query to get the exact name of all tablespaces. You will need this information later in the process.
SQL> SELECT tablespace_name FROM dba_tablespaces;
- As a DBA user, perform a full export from the source database, for example:
> exp system/manager FULL=y FILE=myfullexp.dmp
- Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary at the FTP prompt) to avoid file corruption.
- Create a database on the target server.
- Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.
- As a DBA user, perform a full import with the IGNORE parameter enabled:
> imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.
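As a convenience for the tablespace-creation step above, a query along the following lines can generate a starting point for the tablespace DDL on the target. This is a sketch: the generated file names and sizes come from the source system and will need editing for the target's file structure.

```sql
-- Hypothetical helper: generate rough CREATE TABLESPACE statements
-- from the source database's data dictionary. Review and edit the
-- datafile paths and sizes before running them on the target.
SELECT 'CREATE TABLESPACE ' || tablespace_name ||
       ' DATAFILE ''' || file_name || ''' SIZE ' ||
       CEIL(bytes / 1024 / 1024) || 'M;' AS ddl
FROM   dba_data_files
WHERE  tablespace_name NOT IN ('SYSTEM', 'SYSAUX')
ORDER  BY tablespace_name;
```

Spool the output to a script, adjust it, and run it on the target before the import.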
Monday, June 23, 2008
oracle views
oracle automatic memory management
Sys@Lin11g> alter system reset pga_aggregate_target;
System altered.
Sys@Lin11g> alter system reset sga_target;
System altered.
…no scope=spfile sid='*' clause is needed. This resets the parameter in the spfile only; the values in memory persist.
Sunday, June 22, 2008
procedures,functions,packages
note: select all procedures owned by the jephe user.
dbms_metadata.GET_DDL
select dbms_metadata.get_ddl('PROCEDURE', u.object_name)
from user_objects u
where u.object_type = 'PROCEDURE';
generate 'create or replace public synonym' statements after exporting an oracle database
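One way to do this is to spool the statements out of the data dictionary. This is a sketch, assuming the objects belong to a schema named JEPHE; adjust the owner and the object types to your case.

```sql
-- Hypothetical helper: generate 'create or replace public synonym'
-- statements for the objects owned by JEPHE. Spool the output to a
-- script and run it in the target database.
SELECT 'create or replace public synonym ' || object_name ||
       ' for ' || owner || '.' || object_name || ';'
FROM   dba_objects
WHERE  owner = 'JEPHE'
AND    object_type IN ('TABLE', 'VIEW', 'PROCEDURE', 'FUNCTION', 'PACKAGE');
```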
DBA interview questions from several well-known companies
===============
1. Explain the difference between cold backup and hot backup, and the advantages of each.
Answer: A hot backup is taken while the database is still running, and requires the database to be in archivelog mode. A cold backup is taken after the database has been shut down, and works for databases in any mode. The advantage of a hot backup is that the database remains usable during the backup and can be recovered to any point in time. The advantages of a cold backup are that backup and restore are very simple, and since a cold-backed-up database can run in noarchivelog mode, performance is slightly better than in archivelog mode (because archive logs do not have to be written to disk).
2. You must recover the database from a backup, but you have no control file. How do you solve the problem?
Answer: Recreate the control file, then recover the database with the RECOVER command and its USING BACKUP CONTROLFILE clause.
3. How do you convert an init.ora file to an spfile?
Answer: Use the CREATE SPFILE FROM PFILE command.
4. Explain the difference between a data block, an extent and a segment.
Answer: A data block is the smallest logical storage unit in the database. When a database object needs more physical storage, a set of contiguous data blocks forms an extent. All the extents owned by a database object are called that object's segment.
5. Give two ways to check a table's structure.
Answer: 1. The DESCRIBE command.
2. The DBMS_METADATA.GET_DDL package.
6. How do you check database engine errors?
Answer: The alert log.
7. Compare the TRUNCATE and DELETE commands.
Answer: Both can be used to delete all rows from a table. The difference is that TRUNCATE is a DDL operation: it moves the high-water mark and needs no rollback segment. DELETE is a DML operation: it needs a rollback segment and takes longer.
8. Why use indexes?
Answer: For fast access to the data blocks of a table.
9. Name the two kinds of tables in a STAR SCHEMA and the data each contains.
Answer: Fact tables and dimension tables. A fact table contains the bulk of the primary information, while dimension tables hold descriptive information about certain attributes of the fact table.
10. What kind of index should be built on a FACT table?
Answer: A bitmap index.
11. Give two kinds of referential constraints.
Answer: Primary key and foreign key.
12. How do you rebuild a parent table without affecting its child tables?
Answer: Disable the child tables' foreign keys, rebuild the parent table, then re-enable the foreign keys.
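A sketch of that sequence, assuming a child table emp whose foreign key constraint fk_deptno references the parent table dept (all names here are illustrative):

```sql
-- Disable the child table's foreign key so the parent can be rebuilt.
ALTER TABLE emp DISABLE CONSTRAINT fk_deptno;

-- ... drop and recreate / reorganize the parent table dept here ...

-- Re-enable the foreign key once the parent is back in place.
ALTER TABLE emp ENABLE CONSTRAINT fk_deptno;
```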
13. Explain the difference between archivelog and noarchivelog mode, and the advantages and disadvantages of each.
Answer: In archivelog mode you can back up all database transactions and recover to any point in time. In noarchivelog mode you cannot recover to an arbitrary point in time, but noarchivelog mode gives a slight gain in database performance.
14. How do you create a backup control file?
Answer: ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
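Both forms of the command can be useful; a sketch (the backup path is illustrative):

```sql
-- Write CREATE CONTROLFILE statements to a trace file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- Or take a binary copy of the current control file:
ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control.bkp';
```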
15. Give the stages a database goes through during a normal startup.
Answer:
STARTUP NOMOUNT – the database instance is started
STARTUP MOUNT – the database is mounted
STARTUP OPEN – the database is opened
16. Which column distinguishes the V$ views from the GV$ views?
Answer: INST_ID, which identifies the specific instance in a clustered environment.
17. How do you generate an explain plan?
Answer: Run utlxplan.sql to create the plan table.
For the SQL statement in question, use explain plan set statement_id = 'tst1' into plan_table.
Run utlxplp.sql or utlxpls.sql to view the explain plan.
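A worked example of that sequence (a sketch; the statement and statement_id are illustrative):

```sql
-- Create the plan table (once); utlxplan.sql lives under
-- $ORACLE_HOME/rdbms/admin, which '?' expands to in SQL*Plus.
@?/rdbms/admin/utlxplan.sql

-- Explain a statement:
EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table FOR
  SELECT * FROM emp WHERE deptno = 10;

-- Display the plan (9i and later):
SELECT * FROM TABLE(dbms_xplan.display);
```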
18. How do you increase the buffer cache hit ratio?
Answer: While the database is reasonably busy, use the buffer cache advisory and query v$db_cache_advice. If a change is warranted, use the ALTER SYSTEM SET DB_CACHE_SIZE command.
19. How do you deal with ORA-01555?
Answer: The error message is "snapshot too old" within a rollback segment. It can usually be resolved by enlarging the rollback segments. You should of course also examine the SQL statement that triggered the error.
20. Explain the difference between $ORACLE_HOME and $ORACLE_BASE.
Answer: ORACLE_BASE is the root of the Oracle directory tree; ORACLE_HOME is the directory of an Oracle product installation.
21. How do you determine the database time zone?
Answer: SELECT DBTIMEZONE FROM DUAL;
22. Explain the effect of setting GLOBAL_NAMES to TRUE.
Answer: GLOBAL_NAMES controls how database links are named. If this parameter is set to TRUE, a database link must use the same name as the remote database it connects to.
23. How do you encrypt a PL/SQL program?
Answer: WRAP
24. Explain the differences between a FUNCTION, a PROCEDURE and a PACKAGE.
Answer: Functions and procedures are collections of PL/SQL code, usually written to accomplish a single task. A procedure does not need to return a value, while a function returns one. A package, on the other hand, is a collection of functions and procedures grouped to implement a business function.
25. Explain the use of TABLE functions.
Answer: Table functions return a set of rows produced by PL/SQL logic and are used like ordinary tables/views. They are also used in pipelined and ETL processes.
26. Name three advisory statistics you can collect.
Answer: Buffer Cache Advice, Segment Level Statistics, Timed Statistics.
27. Where in the Oracle directory structure is the audit trail stored?
Answer: On Unix, $ORACLE_HOME/rdbms/audit; on Windows, the Event Viewer.
28. Explain the purpose of materialized views.
Answer: Materialized views are used to reduce the amount of work involved in summarizing, aggregating and grouping information. They are typically suited to data warehouse and DSS systems.
29. When a user process fails, which background process cleans up after it?
Answer: PMON
30. Which background processes refresh materialized views?
Answer: The job queue processes.
31. How do you determine which sessions are connected and what resources they are waiting for?
Answer: V$SESSION / V$SESSION_WAIT
32. Describe what redo logs are.
Answer: Redo logs are the physical and logical structures used to record changes made to the database. They can be used to recover the database.
33. How do you force a log switch?
Answer: ALTER SYSTEM SWITCH LOGFILE;
34. Give two ways to track DDL changes.
Answer: You can use LogMiner or Streams.
35. What does coalescing do?
Answer: Coalescing defragments dictionary-managed tablespaces by merging adjacent small extents into single larger extents.
36. What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
Answer: A temporary tablespace is used for temporary objects such as sort structures, while permanent tablespaces store 'real' objects (tables, rollback segments and so on).
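For illustration, creating one of each might look like this (a sketch; file paths and sizes are illustrative):

```sql
-- A permanent tablespace for ordinary segments:
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 100M;

-- A temporary tablespace backed by a tempfile, used for sorts:
CREATE TEMPORARY TABLESPACE temp01
  TEMPFILE '/u01/oradata/temp01.dbf' SIZE 100M;
```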
37. Which tablespace is created automatically when a database is created?
Answer: The SYSTEM tablespace.
38. What privilege must be granted to a new user so that it can connect to the database?
Answer: CONNECT
39. How do you add a datafile to a tablespace?
Answer: ALTER TABLESPACE <tablespace_name> ADD DATAFILE '<file_name>' SIZE <size>;
40. How do you change the size of a datafile?
Answer: ALTER DATABASE DATAFILE '<file_name>' RESIZE <size>;
41. Which view do you use to check datafile sizes?
Answer: DBA_DATA_FILES
42. Which view do you use to determine a tablespace's free space?
Answer: DBA_FREE_SPACE
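A common way to put these two views together is a free-space summary per tablespace; a sketch:

```sql
-- Total size and free space per tablespace, in megabytes.
-- The (+) outer join keeps tablespaces that have no free extents.
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes) / 1024 / 1024)          AS total_mb,
       ROUND(NVL(f.free_bytes, 0) / 1024 / 1024)  AS free_mb
FROM   dba_data_files d,
       (SELECT tablespace_name, SUM(bytes) AS free_bytes
        FROM   dba_free_space
        GROUP  BY tablespace_name) f
WHERE  d.tablespace_name = f.tablespace_name (+)
GROUP  BY d.tablespace_name, f.free_bytes;
```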
43. How do you find out who inserted a row into a table?
Answer: auditing
44. How do you rebuild an index?
Answer: ALTER INDEX <index_name> REBUILD;
45. Explain what partitioning is and its advantages.
Answer: Partitioning splits large tables and indexes into smaller, more manageable pieces.
46. You have just compiled a PL/SQL package but errors were reported; how do you display the errors?
Answer: SHOW ERRORS
47. How do you gather statistics on a table?
Answer: The ANALYZE command.
48. How do you enable tracing at the SESSION level?
Answer: DBMS_SESSION.SET_SQL_TRACE or
ALTER SESSION SET SQL_TRACE = TRUE;
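The same trace, enriched with bind and wait information, is often enabled through event 10046 (a sketch; level 12 adds bind values and wait events):

```sql
-- Basic SQL trace for the current session:
ALTER SESSION SET SQL_TRACE = TRUE;

-- Richer 10046 trace including bind values and wait events:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- Turn it off again:
ALTER SESSION SET EVENTS '10046 trace name context off';
```

The resulting trace file in user_dump_dest can then be formatted with tkprof.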
49. What is the difference between the IMPORT and SQL*Loader tools?
Answer: Both Oracle tools are used to load data into the database.
The difference is that IMPORT can only process data produced by the other Oracle tool, EXPORT,
while SQL*Loader can load data from various ASCII-format data sources.
50. Name the two files used for network connections.
Answer: TNSNAMES.ORA and SQLNET.ORA
Five final questions:
1. What is normalization? What is 3NF? Also state 1NF and 2NF.
2. What is a reverse key index?
3. What is a deadlock? Give an example of a deadlock arising in database operations.
4. BYTE[] buf = BYTE[1024];
in.read(buf)
where in is a network I/O stream receiving image data. Point out what is wrong with this code
and improve it in Java.
5. You are analyzing a subsystem's interfaces and find there are many of them. A colleague advises you to use the Facade pattern.
What are the benefits of using Facade?
Real questions from Baidu's online written test for PHP/JSP positions
1. Write a program that randomly prints "0", "1" or "2", where the probabilities of "0", "1" and "2" are configurable, e.g. 2:1:3.
2. A database-backed notification publishing system needs flexible configuration of when a notification takes effect:
a) it can be set to take effect during certain hours, e.g. from 18:00 on December 24, 2046 to 00:00 on December 25;
b) it can be set to take effect on certain days of the week, e.g. Saturday and Sunday;
c) it can be set to take effect on certain days of the month, e.g. the 5th and 6th;
d) it can be set to take effect on the odd or even days of a month, e.g. the 1st, 3rd and 5th.
Design the database table structure and briefly explain your design.
3. You are building an online browsing system for a library, letting users browse related books through user-defined book aliases. Assume the library has 10 million registered users and a collection of 10 million books. The system allows users to define their own category names, and each category can contain a number of books. Users can add and delete categories and rename them (a user may not have two categories with the same name); within a category they can add and delete books and change a book's alias (aliases must be unique within a category). Each user may define at most 100 categories, and each category may contain at most 100 books.
A. Assuming a database is used for storage, design the relevant table structures and explain your design decisions.
B. Give the SQL statements for the following operations.
Question: a file contains 10G integers in random order; find the median. Memory is limited to 2G. An outline of the approach is sufficient.
Peak Wong:
1. Divide the integer range into 256M segments; the count for each segment fits in a 64-bit integer, so 256M * 8 bytes = 2G of memory, initialized to zero.
2. Read the 10G integers, map each one to its segment, and increment that segment's counter.
3. Scan the 256M counters to find the segment containing the median and the total count of all segments before it; the memory for the other segments can then be released.
4. Since the range of possible values within the median's segment is now fairly small (for 32-bit integers; for 64-bit integers the segmentation can be repeated), keep a counter per value, read the 10G integers again, and count only the integers that fall in the median's segment.
5. One scan over these new counters locates the median.
I: SQL tuning
1. List several table join methods.
2. Without third-party tools, how do you view a SQL statement's execution plan?
3. How do you use the CBO, and what is the difference between CBO and RULE?
4. How do you locate important (resource-intensive) SQL?
5. How do you trace a particular session's SQL?
6. What do you pay most attention to when tuning SQL?
7. Describe your understanding of indexes (index structure, impact on DML, impact on queries, why they improve query performance).
8. Does using an index always improve query performance? Why?
9. What are bind variables? What are their advantages and disadvantages?
10. How do you stabilize (fix) an execution plan?
11. How is sort-related memory tuned in 8i versus 9i, and what is the temporary tablespace for?
12. Given a table T(a,b,c,d), write SQL to display rows 21–30 ordered by column c.
II: Basic database concepts
1. What do pctused and pctfree mean, and what are they for?
2. Briefly describe the relationship between table / segment / extent / block.
3. Describe the relationship between tablespaces and datafiles.
4. What characterizes locally managed and dictionary-managed tablespaces, and what characterizes ASSM?
5. What are rollback segments for?
6. What are the logs for?
7. What are the main parts of the SGA, and what do they do?
8. What are the main Oracle background processes, and what do they do?
III: Backup and recovery
1. How are backups classified?
2. What does archiving mean?
3. If a table was dropped at 2004-08-04 10:30:00 and there are complete archives and backups, how do you recover it?
4. What is RMAN, and what are its features?
5. What are the features of standby databases?
6. For a system requiring a short recovery time (50G database, 5G of archive logs per day), how do you design the backup strategy?
IV: System administration
1. For a system with performance problems, describe your diagnostic approach.
2. List several ways to diagnose I/O, CPU and overall performance.
3. What do you know about statspack?
4. If you need to create an index on a very large table, what factors do you consider, and how do you minimize the impact on applications?
5. What do you know about RAID 10 and RAID 5?
V: General
1. Which part of Oracle are you best at?
2. Do you like Oracle? Do you like the forums? Which part of Oracle do you prefer?
3. Talk freely about the part of Oracle you find most interesting or most difficult.
4. Why did you choose to become a DBA?
Suggested answers:
I: SQL tuning
1. List several table join methods.
hash join / merge join / nested loop (cluster join) / index join
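For context, the optimizer can be steered toward a particular join method with hints; a sketch (table and column names are illustrative):

```sql
-- Ask the optimizer for a hash join between emp and dept:
SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;

-- Ask for a nested loop join driven by dept:
SELECT /*+ ORDERED USE_NL(e) */ e.ename, d.dname
FROM   dept d, emp e
WHERE  e.deptno = d.deptno;
```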
2. Without third-party tools, how do you view a SQL statement's execution plan?
set autot on
explain plan set statement_id = &item_id for &sql;
select * from table(dbms_xplan.display);
http://download-west.oracle.com/ ... /b10752/ex_plan.htm
3. How do you use the CBO, and what is the difference between CBO and RULE?
With optimizer_mode=choose, if the tables have statistics (partitioned tables aside), the optimizer chooses the CBO; otherwise it chooses the RBO. The RBO follows a simple ranked methodology with 15 ranked access paths: when a query is received, the optimizer evaluates which access paths apply and then runs the query using the best-ranked (lowest-numbered) execution path.
The CBO tries to find the lowest-cost way to access the data. To maximize throughput or minimize initial response time, it computes the cost of different execution plans and chooses the cheapest one; statistics about the tables' data are used to determine the execution plan.
4. How do you locate important (resource-intensive) SQL?
select sql_text
from v$sql
where disk_reads > 1000 or (executions > 0 and buffer_gets/executions > 30000);
5. How do you trace a particular session's SQL?
exec dbms_system.set_sql_trace_in_session(sid,serial#,&sql_trace);
select sid,serial# from v$session where sid = (select sid from v$mystat where rownum = 1);
exec dbms_system.set_ev(&sid,&serial#,&event_10046,&level_12,'');
6. What do you pay most attention to when tuning SQL?
Look at the SQL's response time (db block gets / consistent gets / physical reads / sorts (disk)).
7. Describe your understanding of indexes (index structure, impact on DML, why they improve query performance).
b-tree index / bitmap index / function-based index / partitioned index (local/global). Indexes usually improve the performance of select/update/delete but slow down inserts.
8. Does using an index always improve query performance? Why?
Indexes exist precisely to improve query performance. If a query is not helped by an index, the index was simply used in the wrong place, or the situation did not call for one.
9. What are bind variables? What are their advantages and disadvantages?
Bind variables are the counterpart of literal values. A literal SQL statement writes its conditions directly into the text, so such SQL must be re-parsed for every different value. With bind variables, the conditions are written as variables; the bind values are passed in at run time, then bound and executed. The advantages are fewer hard parses, less CPU contention, and savings in the shared_pool. The disadvantages are that histograms cannot be used and SQL tuning becomes harder.
10. How do you stabilize (fix) an execution plan?
query_rewrite_enabled = true
star_transformation_enabled = true
optimizer_features_enable = 9.2.0
Create and use stored outlines.
http://download-west.oracle.com/ ... /outlines.htm#26854
11. How is sort-related memory tuned in 8i versus 9i, and what is the temporary tablespace for?
In 8i, sort_area_size/sort_area_retained_size determine the memory available for sorting;
if a sort cannot complete within sort_area_size, the temp tablespace is used.
In 9i, with workarea_size_policy=auto,
sorts are done in the PGA; typically a single sort can use up to 1/20 of pga_aggregate_target before it becomes a disk sort.
With workarea_size_policy=manual, the memory available for sorting is determined by sort_area_size.
When executing order by/group by/distinct/union/create index/index rebuild/minus and similar operations,
if the sort cannot complete in the PGA or within sort_area_size, it is performed in the temporary tablespace (disk sort).
The temporary tablespace's main job is to handle the system's disk sorts.
12. Given a table T(a,b,c,d), write SQL to display rows 21–30 ordered by column c.
create table t(a number(8), b number(8), c number(8), d number(8));
/
begin
for i in 1 .. 300 loop
insert into t values(mod(i,2),i/2,dbms_random.value(1,300),i/4);
end loop;
end;
/
select * from (select c.*,rownum as rn from (select * from t order by c desc) c) where rn between 21 and 30;
/
select * from (select * from t order by c desc) x where rownum <= 30
minus
select * from (select * from t order by c desc) y where rownum <= 20;
By comparison, the minus version performs worse.
II: Basic database concepts
1. What do pctused and pctfree mean, and what are they for?
pctused and pctfree control whether a data block appears on the freelist.
pctfree reserves space in the block for updates: when the block's free space falls below the pctfree setting,
the block is removed from the freelist; when DML frees enough space that the block's used space drops below the pctused setting, the block is
added back to the freelist.
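For illustration, these settings are given in the table's storage clause (a sketch; the values are illustrative, and the parameters are only meaningful with manual segment space management):

```sql
-- Keep 20% of each block free for future updates; return a block to
-- the freelist once its used space drops below 40%.
CREATE TABLE orders (
  id     NUMBER,
  status VARCHAR2(10)
)
PCTFREE 20
PCTUSED 40;
```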
2. Briefly describe the relationship between table / segment / extent / block.
When a table is created, a data segment is created for it by default;
each data segment contains the number of extents specified by MINEXTENTS,
and each extent is allocated a certain number of blocks according to the tablespace's storage parameters.
3. Describe the relationship between tablespaces and datafiles.
A tablespace can have one or more datafiles, and each datafile belongs to exactly one tablespace.
A table's data is spread across the datafiles of its tablespace.
A tablespace is a logical concept; datafiles physically store the database's objects.
4. What characterizes locally managed and dictionary-managed tablespaces, and what characterizes ASSM?
Locally Managed Tablespace (LMT):
a tablespace management mode introduced in 8i, which manages the tablespace's space usage with bitmaps.
Dictionary-Managed Tablespace (DMT):
the mode available before 8i (and still usable afterwards), which manages the tablespace's space usage through the data dictionary.
Automatic segment space management (ASSM)
first appeared in Oracle 9.2.0. With ASSM, freelists are replaced by bitmaps, binary arrays
that manage extent allocation and free blocks quickly and efficiently, improving segment storage behavior.
Segments created in an ASSM tablespace are also called Bitmap Managed Segments (BMB segments).
5. What are rollback segments for?
Transaction rollback: when a transaction modifies data in a table, the pre-modification values (the before image) are kept in the rollback segment;
when the user rolls back the transaction (ROLLBACK), Oracle uses the before image in the rollback segment to restore the modified data to its original values.
Transaction recovery: if the instance fails while transactions are being processed, the rollback information is preserved in the undo tablespace,
and Oracle uses it the next time the database is opened to roll back the uncommitted data.
Read consistency: while one session is modifying data, other sessions do not see its uncommitted changes.
While a statement is executing, it does not see uncommitted changes made after it began (statement-level read consistency).
When Oracle executes a SELECT, it uses the current system change number (SCN)
to guarantee that any changes not committed as of that SCN are not processed by the statement. Imagine a long-running query:
if another session changes a data block the query needs, Oracle uses the before image in the rollback segment to construct a read-consistent view.
6. What are the logs for?
They record database transactions, guaranteeing data consistency and safety to the greatest extent possible.
Redo log files: contain the changes made to the database, so that data recovery is possible after a failure; a database needs at least two redo log files.
Archived log files: offline copies of the redo log files; these copies may be necessary for recovery from media failure.
7. What are the main parts of the SGA, and what do they do?
SGA: db_cache/shared_pool/large_pool/java_pool
db_cache:
The database buffer cache (block buffer) is critical to the operation and performance of an Oracle database,
and it occupies the main part of the SGA (system global area). Using an LRU
algorithm, Oracle keeps the most recently accessed data blocks in the cache, optimizing access to data on disk.
shared_pool:
The size of the shared pool is important for Oracle performance.
The shared pool holds the data dictionary cache and fully parsed or compiled PL/SQL blocks and SQL statements together with their control structures.
large_pool:
In an MTS configuration, the UGA is allocated in the SGA to hold user sessions, and large_pool holds that session memory.
When RMAN is used for backups, large_pool serves as a disk I/O buffer.
java_pool:
A memory area reserved for Java procedures; if no Java procedures are used, java_pool is not required.
8. What are the main Oracle background processes, and what do they do?
Database writer (dbwr): writes changed data from the database buffer cache to the datafiles.
Log writer (lgwr): writes the changes in the redo log buffer to the online redo log files.
System monitor (smon): checks database consistency and, if necessary, performs database recovery when the database is opened.
Process monitor (pmon): cleans up resources when an Oracle process fails.
Checkpoint process (ckpt): updates the database status information in the control files and datafiles whenever changes in the buffer cache are permanently recorded in the database.
Archiver (arcn): backs up or archives the filled log group at every log switch.
Job queue coordinator (cjq): schedules and executes the jobs defined in the system, carrying out predefined work.
Recoverer (reco): ensures the consistency of distributed transactions; in a distributed transaction, all nodes either commit together or roll back together.
III: Backup and recovery
1. How are backups classified?
Logical backups: exp/imp
Physical backups:
RMAN backups
full backup/incremental backup (cumulative/differential)
Hot backups: alter tablespace begin/end backup;
Cold backups: offline backups (database shut down)
2. What does archiving mean?
About archived logs: when Oracle archives a filled online redo log group, it creates an archived redo log.
Archived logs are useful for database backup and recovery in the following ways:
A database backup together with the online and archived redo log files guarantees that all committed transactions can be recovered after operating system and disk failures.
If the archived logs are kept permanently, online backups can be taken and used while the database is open and in normal use.
A database can run in two different modes:
NOARCHIVELOG mode or ARCHIVELOG mode. When the database runs in NOARCHIVELOG mode, the online logs cannot be archived; if it runs in ARCHIVELOG mode, the online logs can be archived.
3. If a table was dropped at 2004-08-04 10:30:00 and there are complete archives and backups, how do you recover it?
Manually copy back all the backed-up datafiles, then:
startup mount;
alter database recover automatic until time '2004-08-04:10:30:00';
alter database open resetlogs;
4. What is RMAN, and what are its features?
RMAN (Recovery Manager) is an important DBA tool for backing up, restoring and recovering Oracle databases. RMAN can back up and restore database files, archived logs, control files and the server parameter file, and can perform complete or incomplete database recovery. RMAN has three different user interfaces:
command line, GUI (the backup manager integrated in OEM), and API (for integration with third-party backup software).
Its features include:
1) functionally similar to a physical backup, but far more powerful;
2) it can compress out empty blocks;
3) it supports incremental backups at the block level;
4) backup output can be packaged into backup sets, which can also be split into pieces of a fixed size;
5) the backup and recovery process can be managed automatically;
6) scripts can be used (stored in the recovery catalog);
7) it can detect corrupt blocks.
5. What are the features of standby databases?
A standby database is Oracle's high-availability (HIGH AVAILABILITY) database solution:
logs are shipped between the primary and standby nodes to keep the data synchronized, and the standby node acts as a backup of the primary,
enabling fast switchover and disaster recovery. From 9.2 on, both physical and logical standby servers are supported.
The three data protection modes in 9i are:
1) MAXIMIZE PROTECTION: maximum data protection and no data divergence. LGWR ships redo to the standby node simultaneously,
and before a transaction is confirmed on the primary node, the standby node must have received the log data in full. If a poor network keeps LGWR from shipping the data, serious performance problems result, up to bringing the primary node down.
2) MAXIMIZE AVAILABILITY: no-data-loss mode that allows data divergence and asynchronous shipping.
Under normal conditions it runs like maximum protection mode; when the network between the primary and standby nodes breaks or misbehaves, it automatically switches to maximum performance mode,
and the primary node can keep operating. A poor network has a considerable performance impact.
3) MAXIMIZE PERFORMANCE: essentially the standby mode carried over from 8i: asynchronous shipping,
no data synchronization checks, and data may be lost, but the primary node runs at full performance. MAXIMIZE PERFORMANCE is the default when configuring DATA GUARD in 9i.
6. For a system requiring a short recovery time (50G database, 5G of archive logs per day), how do you design the backup strategy?
RMAN: level 0 on the 1st of each month, level 1 every weekend and on Wednesdays, level 2 on all other days.
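Sketched as RMAN commands, such a scheme might look like the following (illustrative only; channel configuration, formats and retention are omitted, and the exact clauses available depend on the RMAN version):

```sql
-- 1st of the month: level 0 incremental base of the whole database
RUN { BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG; }

-- Weekends and Wednesdays: level 1 incremental
RUN { BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG; }

-- All other days: level 2 incremental
RUN { BACKUP INCREMENTAL LEVEL 2 DATABASE PLUS ARCHIVELOG; }
```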
IV: System administration
1. For a system with performance problems, describe your diagnostic approach.
1. Run statspack to collect information about the system:
get a general picture of the system / determine whether any parameters are set inappropriately / review the top 5 events / review the top SQL, etc.
2. Query v$system_event/v$session_event/v$session_wait:
start from v$system_event to determine which resource is needed (db file sequential read, etc.),
drill into v$session_event to determine which sessions the wait events involve,
and use v$session_wait to determine the detailed resource contention (the p1-p3 values: file_id/block_id/blocks, etc.).
3. Use v$sql/v$sqltext/v$sqlarea to find the SQL with large disk_reads and (buffer_gets/executions) values.
2. List several ways to diagnose I/O, CPU and overall performance.
top/vmstat
statspack
sql_trace/tkprof
querying v$system_event/v$session_event/v$session_wait
querying v$sqlarea (SQL with large disk_reads or buffer_gets/executions)
3. What do you know about statspack?
Statspack is a package supplied by Oracle for collecting database performance metrics. Available since 8i, it has been significantly enhanced in 9i and 10g. Its auxiliary tables (which store the collection parameters and the collected metrics) have grown from the original 25 to 43, and the collection levels from three (0, 5, 10) to five (0, 5, 6, 7, 10). By analyzing the collected metrics, a database administrator can understand in detail how the database is currently running and tune the instance, wait events, SQL and so on. Using the snapshots statspack collects, trend charts of the database's various performance metrics can be produced.
4. If you need to create an index on a very large table, what factors do you consider, and how do you minimize the impact on applications?
Build it when the system is relatively idle; use the NOLOGGING option (not possible if Data Guard is in use); use a large sort_area_size or a large pga_aggregate_target.
5. What do you know about RAID 10 and RAID 5?
RAID 10 (also called RAID 1+0) differs from RAID 0+1: the drives are first combined into RAID 1 arrays, and a RAID 0 array is then striped across those RAID 1 arrays. RAID 10 has the same good data transfer performance as RAID 0+1 but higher reliability. The usable capacity of a RAID 10 array is M×n/2, a 50% utilization rate. RAID 10 also needs at least four drives, so it is expensive. Its reliability is the same as RAID 1's, but because data is striped across the mirrored pairs, its transfer performance is excellent. RAID 5 is very similar to RAID 3, except that the parity information is striped across all the drives together with the data instead of being written to one designated drive, which removes the single parity-drive bottleneck. RAID 5 performs better than RAID 3 but still needs at least three drives; its usable capacity is M×(n-1), a utilization rate of (n-1)/n.