Monday, April 23, 2018



Oracle 12c R2 Multitenant Container Database Architecture

Oracle 12c R2 Database Architecture

New Features in Oracle Database 12c Release 2
The following are changes in the Oracle Multitenant option documentation for Oracle Database 12c Release 2.
New Features
The following features are new in this release:
  • Application containers
An application container is an optional component of a multitenant container database (CDB) that consists of an application root and the application PDBs associated with it. An application container stores data for one or more applications.
  • Application common objects
Application common objects are created in an application root and are shared with the application PDBs that belong to that application root.
  • Support for thousands of pluggable databases (PDBs) in a single CDB
A CDB can now contain up to 4,096 PDBs, a significant increase over the limit of 252 in Release 1.
  • Use different character sets for PDBs
When the character set of the CDB root is AL32UTF8, any container in the CDB can use a character set that is different from the CDB root and from other containers in the CDB. This is useful, for example, when deploying website content for multiple locales from a single CDB.
  • Relocate a PDB from one CDB to another
A PDB can be relocated from one CDB to another in a single operation with minimal downtime.
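As a sketch of the relocation syntax (the PDB name and database link below are illustrative, not from the source), a DBA connected to the target CDB root could run:

```sql
-- source_cdb_link is a database link pointing at the source CDB.
CREATE PLUGGABLE DATABASE hrpdb FROM hrpdb@source_cdb_link RELOCATE;
ALTER PLUGGABLE DATABASE hrpdb OPEN;
```

The optional AVAILABILITY MAX keyword on the RELOCATE clause keeps listener forwarding in place so existing clients are transparently redirected during the move.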
  • Proxy PDB
A proxy PDB references a PDB in a different CDB and provides fully functional access to the referenced PDB, which represents a fundamental improvement.
  • Hot PDB cloning
Now, the source PDB can be in open read/write mode during a PDB clone operation.
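A minimal hot-clone sketch (PDB names are hypothetical; hot cloning assumes the CDB runs in local undo mode and ARCHIVELOG mode):

```sql
-- The source PDB hrpdb can remain open read/write during the clone.
CREATE PLUGGABLE DATABASE hrpdb_dev FROM hrpdb;
ALTER PLUGGABLE DATABASE hrpdb_dev OPEN;
```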
  • Rename services during PDB creation
The SERVICE_NAME_CONVERT clause of the CREATE PLUGGABLE DATABASE statement renames the user-defined services of the new PDB based on the service names of the source PDB.
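A sketch of the clause (PDB and service names are illustrative):

```sql
-- Rename the user-defined service sales_svc to sales_svc_dev
-- in the newly created clone.
CREATE PLUGGABLE DATABASE salespdb_dev FROM salespdb
  SERVICE_NAME_CONVERT = ('sales_svc', 'sales_svc_dev');
```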
  • Switch to a specific service for a container in a CDB
The DBA can specify a service name in an ALTER SESSION SET CONTAINER statement.
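For example (container and service names are hypothetical), a session can switch containers and services in one statement:

```sql
-- Switch the session to PDB salespdb and use the service sales_svc.
ALTER SESSION SET CONTAINER = salespdb SERVICE = sales_svc;
```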
  • Manage the memory usage of PDBs in a CDB
The DBA can configure guarantees and limits for SGA and PGA memory, using PDB initialization parameters.
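A minimal sketch of setting PDB-level memory guarantees and limits (PDB name and sizes are illustrative; these parameters assume a Resource Manager plan is in effect at the CDB level):

```sql
-- Run inside the PDB: guarantee a minimum SGA, set an SGA target,
-- and cap PGA usage for this PDB only.
ALTER SESSION SET CONTAINER = hrpdb;
ALTER SYSTEM SET SGA_MIN_SIZE = 512M SCOPE = BOTH;
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2G SCOPE = BOTH;
```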
  • Limit the I/O generated by specific PDBs
Conveniently, two new initialization parameters, MAX_IOPS and MAX_MBPS, enable the administrator to limit disk I/O generated by a PDB. MAX_IOPS limits the number of I/O operations, and MAX_MBPS limits the megabytes for I/O operations.
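A sketch of capping a PDB's disk I/O (PDB name and values are illustrative):

```sql
-- Run inside the PDB: limit I/O operations per second and
-- I/O megabytes per second for this PDB.
ALTER SESSION SET CONTAINER = hrpdb;
ALTER SYSTEM SET MAX_IOPS = 1000 SCOPE = BOTH;
ALTER SYSTEM SET MAX_MBPS = 200 SCOPE = BOTH;
```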
  • PDB performance profiles
The DBA can specify Resource Manager directives for a set of PDBs using PDB performance profiles.
  • Monitor PDBs managed by Oracle Database Resource Manager
A set of dynamic performance views enables an administrator to monitor the results of Oracle Database Resource Manager settings for PDBs.
  • Prioritize PDB upgrades
The DBA can prioritize the PDBs in a CDB when upgrading the CDB. PDBs with higher priority are upgraded before PDBs with lower priority.
  • CDB undo mode
A CDB can run in local undo mode or shared undo mode. Local undo mode means that every container in the CDB uses local undo. Shared undo mode means that there is one active undo tablespace for a single-instance CDB. For an Oracle RAC CDB, there is one active undo tablespace for each instance.
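Switching an existing CDB to local undo mode is done in UPGRADE mode; a sketch (to be run as SYSDBA in the CDB root):

```sql
-- Switching undo modes requires a restart in UPGRADE mode.
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER DATABASE LOCAL UNDO ON;
SHUTDOWN IMMEDIATE
STARTUP

-- Verify the current undo mode:
SELECT property_value FROM database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED';
```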
  • Parallelized PDB creation
The PARALLEL clause of the CREATE PLUGGABLE DATABASE statement specifies whether to use parallel execution servers during PDB creation and, optionally, the degree of parallelism.
  • Unplugging PDBs and plugging in PDBs with an archive file
A PDB can be unplugged into a compressed archive containing the XML file that describes the PDB and the files used by the PDB (such as the data files and wallet file). The archive file has a .pdb extension, and it can be used to plug the PDB into a CDB or an application container.
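A sketch of unplugging into a .pdb archive and plugging it back in (PDB name and path are illustrative):

```sql
-- Unplug hrpdb into a self-contained .pdb archive file.
ALTER PLUGGABLE DATABASE hrpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE hrpdb UNPLUG INTO '/u01/stage/hrpdb.pdb';

-- Later, in another CDB or application container:
CREATE PLUGGABLE DATABASE hrpdb USING '/u01/stage/hrpdb.pdb';
```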
  • PDB refresh
The DBA can create a PDB as a refreshable clone and refresh the PDB with changes made to the source PDB.
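A refreshable-clone sketch (PDB names and the database link are hypothetical):

```sql
-- Create a refreshable clone that pulls changes from the source
-- every 120 minutes over a database link.
CREATE PLUGGABLE DATABASE hrpdb_ref FROM hrpdb@cdb1_link
  REFRESH MODE EVERY 120 MINUTES;

-- A manual refresh is also possible while the clone is closed:
ALTER PLUGGABLE DATABASE hrpdb_ref REFRESH;
```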
  • Improved support for default tablespace specification during PDB creation
The DBA can specify a default tablespace for a PDB that is created using techniques such as cloning and plugging in the PDB. Previously, a default tablespace could be specified only if the PDB was created from PDB$SEED.
  • Extended USER_TABLESPACES clause during PDB creation
The creation mode of the user tablespaces can be different from the creation mode of the PDB. For example, during PDB creation, a user tablespace's files can be moved even when file copy is specified for the PDB.
Previous Improvements in Oracle Database 12c Release 1
The following are changes in the Oracle Multitenant option for Oracle Database 12c Release 1.
New Features
The following features are new in this release:
  • Preserving the open mode of PDBs when the CDB restarts
Of course, the administrator can preserve the open mode of one or more PDBs when the CDB restarts by using the ALTER PLUGGABLE DATABASE SQL statement with a pdb_save_or_discard_state clause.
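A sketch of preserving open mode (the PDB name is illustrative):

```sql
ALTER PLUGGABLE DATABASE hrpdb OPEN;
ALTER PLUGGABLE DATABASE hrpdb SAVE STATE;

-- Or preserve the open mode of every PDB in the CDB:
ALTER PLUGGABLE DATABASE ALL SAVE STATE;
```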
  • The USER_TABLESPACES clause of the CREATE PLUGGABLE DATABASE statement
An administrator can use this clause to separate the data for multiple schemas into different PDBs. For example, assume that each schema in a non-CDB uses a separate tablespace. When the DBA moves a non-CDB to a PDB, and the non-CDB has schemas that supported different applications, this clause can be used to separate the data belonging to each schema into a separate PDB.
  • Excluding data when cloning a PDB
The NO DATA clause of the CREATE PLUGGABLE DATABASE statement specifies that a PDB's data model definition is cloned but not the PDB's data.
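A sketch of a metadata-only clone (PDB names are illustrative):

```sql
-- Clone only the data model (tables, views, PL/SQL) without the rows.
CREATE PLUGGABLE DATABASE hrpdb_shell FROM hrpdb NO DATA;
```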
  • Default Oracle Managed Files file system directory or Oracle ASM disk group for a PDB's files
The CREATE_FILE_DEST clause specifies the default location.
  • Create a PDB by cloning a non-CDB
The DBA can create a PDB by cloning a non-CDB with a CREATE PLUGGABLE DATABASE statement that includes the FROM clause.
  • The logging_clause of the CREATE PLUGGABLE DATABASE and ALTER PLUGGABLE DATABASE statement
This clause specifies the logging attribute of the PDB. The logging attribute controls whether certain DML operations are logged in the redo log file (LOGGING) or not (NOLOGGING).
  • The pdb_force_logging_clause of the ALTER PLUGGABLE DATABASE statement
This clause places a PDB into force logging or force nologging mode or takes a PDB out of force logging or force nologging mode.
  • The STANDBYS clause of the CREATE PLUGGABLE DATABASE statement
This clause specifies whether the new PDB is included in standby CDBs.
  • Querying user-created tables and views across all PDBs
The CONTAINERS clause enables administrators to query user-created tables and views across all PDBs in a CDB.
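For example (the table name is hypothetical), a common user connected to the CDB root could count rows across every open PDB that contains the table:

```sql
-- Count rows of a user-created table named sales in each container.
SELECT con_id, COUNT(*) AS row_count
FROM   CONTAINERS(sales)
GROUP  BY con_id
ORDER  BY con_id;
```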
New Features in Oracle 18c R1
Oracle 18c Release 1 Database SGA (basic view)
The following are changes highlighted in Oracle 18c Multitenant:
  • CDB fleet
A CDB fleet is a new concept: a collection of different CDBs that can be managed as one logical CDB.
  • PDB snapshot carousel
A PDB snapshot is a point-in-time copy of a PDB. When a PDB is enabled for snapshots, the administrator can create multiple snapshots (point-in-time copies) of the PDB. The library of snapshots is called a PDB snapshot carousel. The DBA can quickly clone a new PDB based on any snapshot in the carousel, which makes it possible to perform point-in-time recovery to any snapshot or to rapidly provision a PDB by cloning one.
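A sketch of the carousel workflow (PDB and snapshot names are illustrative):

```sql
-- Enable automatic snapshots every 24 hours for hrpdb.
ALTER PLUGGABLE DATABASE hrpdb SNAPSHOT MODE EVERY 24 HOURS;

-- Take a manual snapshot while connected to hrpdb:
ALTER PLUGGABLE DATABASE SNAPSHOT hrpdb_snap1;

-- Clone a new PDB from a snapshot in the carousel:
CREATE PLUGGABLE DATABASE hrpdb_clone FROM hrpdb USING SNAPSHOT hrpdb_snap1;
```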
  • Logical partitioning
A container map enables a session to issue SQL statements that are routed to the appropriate PDB, depending on the value of a predicate used in the SQL statement. The partitioning column in the map table does not need to match a column in the metadata-linked table. For example, if the table sales is enabled for the container map pdb_map_tbl, and sales does not have the column used to partition pdb_map_tbl, queries with the predicate CONTAINERS(sales) are still routed to the PDBs specified in the map table.
  • Refreshable PDB switchover
A refreshable clone PDB is a read-only clone that can periodically synchronize with its source PDB. The DBA can reverse the roles, transforming the source PDB into the clone and the clone into the source. This technique can be useful for load balancing. Also, if the source PDB fails, administrators can resume operations on the clone PDB, rendering a CDB-level Oracle Data Guard failover unnecessary.
  • Lockdown profile enhancements
The DBA can create, alter, or drop lockdown profiles in application containers. Also, administrators can create lockdown profiles based on a static or a dynamic base profile.
  • DBCA enhancements
The DBA can use DBCA to clone a local PDB or duplicate a CDB. Duplication is only supported in silent mode.
  • Usable backups of non-CDBs and relocated PDBs
When administrators are cloning a non-CDB as a PDB or relocating a PDB, the DBA can use the DBMS_PDB.EXPORTRMANBACKUP procedure to export RMAN backup metadata into the PDB dictionary. This metadata enables backups of the source non-CDB or PDB to be usable for restore and recovery of the target PDB.
  • RMAN duplication of a PDB to another CDB
The DBA can clone a PDB from a source CDB to an existing CDB that is open read/write.
  • Relocation of sessions during planned maintenance
Application Continuity can drain database sessions during planned maintenance when the application submits a connection test, at request boundaries, and at good places to fail over. The relocation is transparent to applications. This feature is on by default for all maintenance operations invoked at the database service and PDB levels: stop service, relocate service, relocate PDB, and stop PDB.
  • Copying a PDB in an Oracle Data Guard environment
When performing a remote clone into a primary database, or plugging a PDB into a primary database, administrators can set initialization parameters in a standby database that automate copying the data files for the newly created PDB.
  • Parallel statement queuing at the PDB level
A DBA can configure parallel statement queuing for a PDB just as for a non-PDB using the PARALLEL_SERVERS_TARGET initialization parameter. At the PDB level, the default is based on the CPU_COUNT setting for the PDB. In contrast, at the CDB level, the default value is the value of the PARALLEL_MAX_SERVERS initialization parameter.
  • Split mirror clone PDBs
When a PDB resides in Oracle ASM, administrators can use a split mirroring technique to clone a PDB. The cloned PDB is independent of the original PDB. The principal use case is to rapidly provision test and development PDBs in an Oracle ASM environment.

Friday, August 11, 2017




Whether your company is small or part of a major corporation, Oracle Database Appliance (ODA) can accelerate, enhance, and improve your database services at any desired level. ODA supports RAC, RAC One Node, and Enterprise standalone databases. With ODA, DBAs can work faster and more efficiently, providing the agility that developers and software engineers need to keep every major project on time. Database service leads and DBAs can gain further efficiency from the key elements of the ODA framework: simplicity, pre-built optimization, and affordability at any SLA. Whether deployed in a remote or branch office, used for development or testing, or run simply as a solution in a box, ODA delivers strong performance, reliability, and availability at all times. It further provides capabilities for full-stack hardware refresh, database consolidation, and backup and recovery. In addition, ODA supports other Oracle technologies such as Transparent Data Encryption (TDE), integrates with BI tools such as Hyperion, and supports application deployments such as PeopleSoft and Oracle E-Business Suite.

In principle, the Oracle Database Appliance uses twenty hard drives for storing user data. These disks are 600 GB SAS hard drives, allowing for a total of 12 TB of raw storage. They are hot-pluggable, front mounted, and are accessible to each of the two servers in the Oracle Database Appliance.
Likewise, the Oracle Database Appliance is engineered to tolerate hardware component failures. The onboard storage subsystem is designed for maximum availability. Whenever a server loses access to the disks, the other server will still have access.
When a new Oracle Database Appliance is configured for use, Oracle Automatic Storage Management (ASM) is used to create and manage the underlying storage. The ASM +DATA and +RECO disk groups are created during the installation of the Oracle Database Appliance. When configuring the appliance, DBAs have the option of “External Backups” or “Internal Backups”. The +RECO disk group will be larger if the “Internal Backups” option is selected during the installation process.

Performance Architecture
The testing for this white paper was performed on an Oracle Database Appliance X4-2 system (the newer X6-2 surpasses its capabilities), which consists of two x86 servers, each with two 12-core Intel E5-2697 CPUs and 256 GB of memory. The two nodes use direct-attached storage comprised of twenty 900 GB, 10,000 rpm SAS hard disk drives and four 200 GB SAS solid state disks (SSDs). Thus the Oracle Database Appliance X4-2 provides a total of 512 GB of memory, 48 CPU cores, 18 TB of raw HDD storage, and 800 GB of SSD storage. With the addition of the Expansion Storage Shelf, the available storage can be doubled.

More specifically, the Oracle Database Appliance X6-2S and Oracle Database Appliance X6-2M hardware are engineered as single rack-mountable systems that provide the performance benefits associated with the latest generation Intel® Xeon® processors and NVM Express (NVMe) flash storage. Indeed, the Oracle Database Appliance X6-2S is powered by one 10-core Intel® Xeon® processor E5-2630 v4 and 128 GB of main memory, expandable to 384 GB. The Oracle Database Appliance X6-2M doubles the processor and memory resources by offering two 10-core Intel® Xeon® processors E5- 2630 v4 and 256 GB of main memory, expandable up to 768 GB. Both systems come configured with 6.4 TB of high-bandwidth NVM Express (NVMe) flash for database storage and offer the option to double the storage capacity to 12.8 TB of NVM Express (NVMe) flash.
Indeed, the Oracle Database Appliance provides various options for attaching network-based storage. It currently contains a total of six 1GbE ports and two 10GbE ports. All ports are bonded at the kernel level to provide high availability in an active/passive configuration. The Oracle Database Appliance utilizes four bonded network configurations:
  • Net0 and Net1 – network ports for the 1GbE public network
            o Normally used for the database public network interface
            o Eth2 and Eth3, bonded as “bond0”
  • PCIe0 – network ports for the 10GbE public network
            o Normally used for the database public network interface when 10GbE is required
            o Eth8 and Eth9, bonded as “Xbond0”
  • PCIe1 – additional 1GbE network interface ports
            o Normally used for NFS or backup networking
            o Eth4 and Eth5, bonded as “bond1”
            o Eth6 and Eth7, bonded as “bond2”
  • Eth0 and Eth1 – internal interconnect

High Availability
The Oracle Database Appliance is an engineered system that is designed from the hardware through the software stack to ensure high availability of the shared disk subsystem. Each Oracle Database Appliance server has redundant paths to the disks in the event of a hardware component failure. Besides, each server can access the disks independently of the other, which means one server can be down while the other server still has access to the disks.
Oracle Direct NFS
Oracle provides the ability to manage NFS using a feature called Oracle Direct NFS (dNFS). Oracle Direct NFS implements the NFS V3 protocol within the Oracle database kernel itself. The Oracle Direct NFS client overcomes many of the challenges of using NFS with the Oracle Database: it is simple to configure, performs better than traditional NFS clients, and offers consistent configuration across platforms.
Technological innovation in Oracle compression assists customers in reducing the size of large data volumes, allowing for DBAs and administrators to significantly reduce their overall database storage footprint with compression for all types of data, namely: relational (table), unstructured (file), or backup data. Compression benefits include:
   Reduced live OLTP data size
   Reduced backup size
   Optimally decreased disaster or standby database size
   Minimized export dump size
   Reduced disk I/O while reading data blocks (without overhead while reading)
   Reduced network traffic while sending archive redo logs to the DR site.
Hybrid Columnar Compression 

Currently, with Oracle Database Appliance version 2.2 and higher and the Sun ZFS Storage Appliance (ZFSSA), data can be compressed using Hybrid Columnar Compression (HCC). This is quite beneficial because with HCC, it is possible to store user data in significantly less space and retrieve data with less scan I/O. With HCC warehouse compression, it is possible to attain 6 to 10 times storage savings, and with HCC archive compression, 15 to 70 times. For both archive and warehouse compression, there are LOW and HIGH settings to choose from. The increased storage savings may cause data load times to increase. Therefore, LOW should be chosen for environments where load-time service levels are more critical than query performance.
HCC allows you to maximize your storage capabilities and accommodate accelerated data growth without sacrificing performance.
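A sketch of the HCC table syntax (table and column names are illustrative; HCC assumes supported storage such as the ZFS Storage Appliance):

```sql
-- Warehouse compression: optimized for query performance on warm data.
CREATE TABLE sales_history (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER(12,2)
) COMPRESS FOR QUERY HIGH;

-- Archive compression: maximum space savings for cold data.
CREATE TABLE sales_archive (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER(12,2)
) COMPRESS FOR ARCHIVE LOW;
```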

ODA Architectural Enhancements and Customization
Customers can use Oracle certified NFS-attached storage with Oracle Database Appliance to store database files for both read and write operations. This external storage can be used to further extend the storage offered by Oracle Database Appliance, e.g., for the purpose of:
• Expanding beyond the 4TB limit: place additional data on NFS-attached storage.
• Backups: create database disk backups external to the Oracle Database Appliance.
Expanding Oracle Database Appliance Storage
• Storage Tiering: place frequently accessed tables on internal SAS disk, and less frequently used tables on NFS. You can partition large tables and move the older partitions to an NFS-attached tablespace based on usage.
• Enable Hybrid Columnar Compression (HCC): see dramatic compression and performance gains using HCC on ZFS Storage Appliance
VLAN Support
VLAN Support provides Network Security Isolation for Multiple Workloads Sharing Common Network
• Servers have a finite number of networks, while ODA X6-2-HA has two bonded public network interfaces.
• When multiple networks are required, there needs to be a way to share the network interfaces provided.
• Support for VLANs enables the sharing of a common network interface while still providing security isolation, since one workload cannot sniff the packets of another.

Oracle Data Guard Strategy

Customizing Oracle Data Guard with ODA provides an approach to Maximum Availability Architecture (MAA), since ODA provides a pre-built, pre-tuned, and ready-to-use clustered database system that includes servers, storage, and software.

Data Guard provides an extra level of disaster recovery and, in some business process models, can also be part of the company's daily engineering and production strategy.

Benefits of Using Oracle Data Guard

Among the key benefits, we can name the following:
·      Migration to Oracle Database Appliance
·      Disaster Recovery
·      High-Availability
·      Database Rolling Upgrades
·      Offloading Workload Activities, including read-only queries, backups, and block repair, among others.
·      Snapshot Standby

Strategies and Practices for Setup

Among the practices for setup, a lead DBA can utilize the following approaches, namely:

Matching the primary and standby configuration

In order to maintain consistent service levels and to use the primary and standby databases seamlessly, it is important to match the resources, setup, and configuration of the two systems as much as possible. This can involve the following options:

·      Running Primary and Standby Database on Separate Oracle Database Appliances
·      Running Primary and Standby Database in Same Configuration
·      Sizing Primary and Standby Instances Similar to Each Other
·      Pre-configuring Primary and Standby Databases for Role Transition

Configuring Flashback Database on both Primary and Standby Databases

The Flashback Database feature enables rapid role transitions and reduces the effort required to reestablish database roles after a transition.

Using Dedicated Network for Standby Traffic
Oracle Database Appliance comes pre-built with multiple redundant network interfaces.
Offloading Certain Workloads to Standby

Oracle Recovery Manager (RMAN) works consistently and transparently across the primary and standby databases.
Utilizing Oracle Active Data Guard
Oracle Active Data Guard allows the standby database to be open for read-only operations while managed recovery (redo transmission and application on the standby) is concurrently active.
Reviewing Oracle Maximum Availability Architecture (MAA) Best Practices for Oracle Database
Depending on the kind of deployment and usage of the Data Guard environment, you may find the following additional best practices for Oracle Data Guard useful.
--> The following is a summary of an ODA Lab, as provided by Oracle University.