A brief overview of Oracle administration

ABSTRACT

The DBA is the administrator who plans, designs, implements, tests, operates, and maintains databases for an organization. The DBA wears many hats: there are numerous types of DBAs that contribute to the overall database functionality. This paper examines the most common types of database administrators and the duties of each. Additionally, we examine database monitoring tools and the Optimal Flexible Architecture (OFA) standard, and how the DBA administers security, performance tuning, and backup and recovery of the database.

Different database administrator types

The production database administrator works with other information systems administrators and specializes in the creation and management of database tables, backup and recovery, security, and performance of databases. Day-to-day administrative tasks are the duties of this position, including monitoring the database to ensure availability and watching CPU and memory utilization and system I/O, among numerous other things. Security is a large part of the DBA repertoire, and user accounts, roles, and profiles make up much of that work. Tablespace creation and control file backup are important tasks the production database administrator performs to ensure quick recovery from any corruption or data loss within the database.
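As a minimal sketch of a control file backup, either of the statements below can be issued from an administrative session; the file path is an illustrative assumption, not a prescribed location:

    -- Write a script that can re-create the control file to the trace directory
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

    -- Or copy the binary control file to a backup destination (path is hypothetical)
    ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control01.ctl' REUSE;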

Interaction with the analysts (programmers) would involve backup planning, as incorrectly timed backups can interfere with database operation. Files that are being written to are locked, of course, but if a backup application tries to copy such a file at that moment, errors appear in the backup logs and the copy may be unusable: it is best to schedule backups during periods of reduced activity.

The development DBA is usually an analyst who spends much of the time programming and configuring applications and interfaces that connect to the database. Supporting the application development life cycle, this type of DBA often builds database environments that are not yet operational, and thus has no immediate impact on the business. The DBA follows project schedules and Gantt charts frequently and works closely with management and programmers to ensure support of business applications via the database structures. This includes working with the “application team to create and maintain effective database-coupled application logic [which includes] stored procedures, triggers, and user-defined functions (UDFs)” (Mullins, 2003).
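As a hedged illustration of such database-coupled logic, the trigger below stamps each row with its last-modified time; the table and column names are assumptions made for the sketch, not part of any cited schema:

    -- Hypothetical audit trigger on an orders table
    CREATE OR REPLACE TRIGGER trg_orders_touch
      BEFORE INSERT OR UPDATE ON orders
      FOR EACH ROW
    BEGIN
      :NEW.last_modified := SYSDATE;  -- record when the row last changed
    END;
    /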

Patching is a constant in any information system, and patches must be tested within a development or test system before being integrated into production. Thorough testing of patches must therefore be undertaken to examine the effects they would have on the production system.

Skill in normalization is paramount in this position. Following the Database Life Cycle (DBLC), similar to the systems development life cycle, this position works with management and top analysts in planning the database to realize business rules and goals.
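As a small hypothetical example of normalization, the unnormalized design below repeats customer details in every order row; splitting it into two related tables removes that redundancy (all table and column names are illustrative):

    -- Unnormalized: orders(order_id, customer_name, customer_phone, order_date)
    -- repeats customer details on every row.

    -- Normalized: customer data lives in one place.
    CREATE TABLE customers (
      customer_id   NUMBER PRIMARY KEY,
      customer_name VARCHAR2(100),
      phone         VARCHAR2(20)
    );

    CREATE TABLE orders (
      order_id    NUMBER PRIMARY KEY,
      customer_id NUMBER REFERENCES customers(customer_id),
      order_date  DATE
    );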

OFA Standard

The Optimal Flexible Architecture standard is a set of guidelines that defines file naming and configuration to ensure more structured Oracle installations, and every database should follow it. It also provides standards that can improve system performance by “spreading I/O functions across separate devices by separating data from software” (Powell & McCullough-Dieter, 2007). Bottlenecks can be avoided by utilizing OFA, as it improves performance by sorting elements into distinct directories placed on separate devices. In short, OFA is a logical file-naming and file-placement methodology that enhances system I/O.

The Oracle OFA structure is installed under the directory ORACLE_BASE. On Windows systems it follows the convention c:\oracle\product\11.x; within the UNIX/Linux architecture it is /app/oracle/product/11.x. The database file architecture then includes, within the base:

admin: initialization files and logs

db_1: the main Oracle database installation

client_1: client installation

oradata/<db name>: datafiles, control files, and redo logs for each database

flash_recovery_area: backup and recovery files and archive logs

Each database function has its own area, or location of operation and “is stored separately” (Powell & McCullough-Dieter, 2007) from other types of system files.

All Oracle binaries are stored under the directory structure known as ORACLE_HOME. Each database has its own ORACLE_HOME, e.g. /app/oracle/product/11.x/db1 (UNIX) or c:\oracle\product\11.x\db1 (Windows). Within the OFA naming standard of Oracle, there are three important file types located within the ORACLE_BASE/oradata directory (a query for locating them on a live instance follows this list):

  • Control file (.ctl) – associated with only one database, this administrative file contains the database name and unique identifier (DBID), creation timestamp, data file information and redo log files, tablespace information, and RMAN backup information.
  • Redo log file (.log) – The database has two or more of these, which consist of redo entries, or records, that can be used to restore recent changes to database data if needed. These redo log files are known collectively as the redo log.
  • Datafile (.dbf) – Belonging to each tablespace within a database, the datafile contains tables and indexes for the database.
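To see where a running instance actually keeps these files, the DBA can query the standard dynamic performance views; this is a minimal sketch using stock Oracle views:

    -- Locate this instance's control files, redo log members, and datafiles
    SELECT name   FROM v$controlfile;
    SELECT member FROM v$logfile;
    SELECT name   FROM v$datafile;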

As we have seen, Oracle is a very organized system that segregates its various elements to improve I/O, performance, and recoverability.

Database security and auditing

Besides regular users, who can access the database with fundamental read/write permissions, there are two administrative privileges within the Oracle database: SYSDBA and SYSOPER. These are high-level privileges that allow one to start up and shut down the database and create database entities. SYSDBA is for fully empowered database administration, allowing all database permissions and access. SYSOPER also carries broad administrative permissions, with the exception that it does not permit the holder to look at user data.
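As a brief hedged illustration, the statements below grant and then use these privileges; the user name is hypothetical:

    -- Grant the administrative privileges to a (hypothetical) user
    GRANT SYSDBA TO jsmith;
    GRANT SYSOPER TO jsmith;

    -- From SQL*Plus: connect with the elevated privilege, then start the instance
    CONNECT jsmith AS SYSDBA
    STARTUP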

System auditing is used for supervising access, updates, and deletions. Oracle has three types of auditing:

  • Statement auditing, which requires the AUDIT SYSTEM privilege, can monitor changes to tables by a certain user account, within this context: AUDIT UPDATE TABLE BY <user name>;.
  • Privilege auditing tracks the use of a system privilege, such as table creation, by any user.
  • Object auditing monitors a particular object, for example queries against a particular table. The syntax for this is AUDIT SELECT ON <TableName>;.
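A minimal sketch of all three audit types together; the user, schema, and table names are placeholders:

    -- Statement auditing: track UPDATE statements issued by one user
    AUDIT UPDATE TABLE BY jsmith;

    -- Privilege auditing: track every use of the CREATE TABLE privilege
    AUDIT CREATE TABLE;

    -- Object auditing: track SELECT statements against one table
    AUDIT SELECT ON hr.employees;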

Database monitoring

Monitoring of the database can be done via alerts, which notify the administrator when a threshold has been reached. Alerts can also be set to perform actions, such as running a script, when a threshold is crossed. An example is a script that shrinks tablespace objects whenever a usage alert is triggered.
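As a hedged sketch, a tablespace usage threshold can be set with Oracle's DBMS_SERVER_ALERT package; the tablespace name and the percentage values below are illustrative assumptions:

    BEGIN
      -- Warn at 85% full and raise a critical alert at 97% (illustrative values)
      DBMS_SERVER_ALERT.SET_THRESHOLD(
        metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
        warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
        warning_value           => '85',
        critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
        critical_value          => '97',
        observation_period      => 1,
        consecutive_occurrences => 1,
        instance_name           => NULL,
        object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
        object_name             => 'USERS');
    END;
    /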

Another means of database monitoring is performance self-diagnostics via the Automatic Database Diagnostic Monitor (ADDM). Oracle “diagnose[s] its own performance” (Oracle, 2009) and can decide how to resolve identified anomalies. Snapshots are a part of ADDM and are used for performance comparison against other time periods; snapshots are stored for up to eight days before being purged to free space for newer snapshots.
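A minimal sketch of adjusting how long snapshots are kept and how often they are taken, using the DBMS_WORKLOAD_REPOSITORY package (the values are illustrative):

    BEGIN
      -- Retain snapshots for 8 days (expressed in minutes), taken hourly
      DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
        retention => 11520,  -- 8 days x 24 hours x 60 minutes
        interval  => 60);
    END;
    /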

Monitoring general database state and workload is done by the administrator via the database administration home page on the local system. CPU and memory utilization are presented, and all tools are available to shut down the database, edit users, monitor queries and SQL response time, configure flashback, monitor workload, and much more.

Managing alerts includes viewing thresholds, clearing alerts, and “setting up direct alert notification” (Oracle, 2009).

Performance tuning

Part of proactive maintenance, performance tuning is the result of good monitoring practice. Tuning the database begins with the prediction and tracking of performance problems via the monitoring tools within the Database Control area. The monitors are also advisors: ADDM, as previously mentioned, is an advisor as well as a monitor. Other advisors within Advisor Central in 10g Database Control are the SQL Tuning Advisor, SQL Access Advisor, Memory Advisor, MTTR Advisor, Segment Advisor, and Undo Management.
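As one hedged example, the SQL Tuning Advisor can also be driven from the command line through the DBMS_SQLTUNE package; the SQL ID and task name below are placeholders:

    DECLARE
      tname VARCHAR2(64);
    BEGIN
      -- Create and run a tuning task for one captured SQL statement
      tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                 sql_id    => '9babjv8yq8ru3',  -- placeholder SQL ID
                 task_name => 'tune_demo');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => tname);
    END;
    /

    -- Review the advisor's findings and recommendations
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_demo') FROM dual;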

Backup and recovery

There are a few types of database backups in Oracle. A hot backup is performed while the database is in full operation, with files open. Snapshots are used to back up the database, or smaller parts of it, one file at a time; this way, files can be restored into a running database individually or as a group. Backup tools for this include the export and import Data Pump utilities, tablespace copies, RMAN (Recovery Manager), and Oracle Enterprise Manager with Database Control (Powell & McCullough-Dieter, 2007). One effective means of continuity within an unstable environment is the standby database: a backup database identical to the production database that can be failed over to in seconds to provide continuous availability. This is also known as a physical standby database.
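A minimal RMAN sketch of a hot backup is shown below; it assumes the database runs in ARCHIVELOG mode, and it is a generic example rather than a prescribed procedure:

    # Run from the RMAN command line, connected to the target database
    CONNECT TARGET /

    # Back up the whole database along with its archived redo logs
    BACKUP DATABASE PLUS ARCHIVELOG;

    # Confirm what was written
    LIST BACKUP SUMMARY;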

The most common tools of backup and recovery are checkpoints, redo logs, archive logs, and flashback. According to Powell and McCullough-Dieter, a combination of archive logs and redo logs provides the ability to recover to a point in time, as long as the archive logs have been retained in the database structure (2007). Flashback uses the flash recovery area, which retains flashback data for a specific window of operation. The difference between physical and flashback recovery is that physical recovery can entail restoring the entire database, whereas flashback can recover specific parts, such as a table or a table's entities.
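As a hedged sketch, flashback at the table level looks like the following; the schema and table names and the time window are placeholders, and row movement must be enabled on the table first:

    -- Allow Oracle to change rowids during the flashback operation
    ALTER TABLE hr.employees ENABLE ROW MOVEMENT;

    -- Rewind the table to its state fifteen minutes ago
    FLASHBACK TABLE hr.employees
      TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;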

Conclusion

Research skills are important for a successful database administrator, as the occupation includes examining anomalies and bugs, researching patches, and developing normalized designs. Performance monitoring and tuning are daily tasks for the production DBA, and both production and development DBAs should be well versed in database normalization, standards, monitoring, and backup and recovery tools and methods.

References

McLean, C. (2006). Database Administrators: Multitasking for Advancement. Certification Magazine, 8(1), 30-40. Retrieved from Research Starters – Business database.

Mullins, C. (2003). DBAs Need Different Skills in Development and Production. The DBA Corner. Database Trends and Applications. Retrieved February 5, 2010 from http://www.craigsmullins.com/dbta_023.htm

Powell, G., & McCullough-Dieter, C. (2007). Oracle 10g Database Administrator: Implementation & Administration. Boston: Thomson Course Technology.

Oracle. (2009). Monitoring and Tuning the Database. Oracle Database 2 Day DBA, 11g Release 2 (11.2). Retrieved February 13, 2010 from http://download.oracle.com/docs/cd/E11882_01/server.112/e10897/montune.htm#CACCIHAB


Key principles of risk management

Risk management is an important element of organizational security and business continuity. This includes identifying vulnerabilities in the business and planning and executing the steps to ensure the security and continuity of information systems in the organization.

To protect an organization from internal and external threats, knowledge of those threats must be acquired. One must know one's enemy when it comes to defense: the more that is known about the opponent, the better prepared one is against it. Within an organization, threat assessment must be a constant topic of research. To formulate and implement a risk assessment strategy, an organized plan must be produced that defines the classes of information assets, then defines threats, and finally defines control strategies.

Identifying and classifying information assets

This phase includes identifying people and groups, procedures, data, software, and hardware and networking entities, and asset identification applies to each of these classes.

People include both employees and non-employees, such as outside contractors and visitors. Each employee should have an accurate record of security level definitions and training, and records of outside contractors and vendors should be kept up to date. Procedures, or courses of action, within each group should be reviewed and enforced. Procedures are defined by policy, and policy should always be enforced: policy is only as effective as the level of enforcement.

Classifying data is a task for information systems administrators and includes prioritizing information systems and determining the levels of risk in each. Of course, there is no such thing as a system without risk: any information system can be compromised, and due diligence must be taken on every system, no matter its importance. If it is part of the organization, it must be secured, from smartphone to mainframe. Data is the lifeblood of an organization and must not only be secured but also kept from the danger of corruption by fragmentation, snooping, dropped packets during transport, and exposure to outside entities. Data can be classified by “owner, creator, and manager…size of data structure; location” (Whitman & Mattord, 2005).

Software must be inventoried and its licensing kept accurate. This includes operating systems and applications on servers as well as end-user PCs. Monitoring software that queries software installations and configurations on each server and user workstation is essential to software inventory and to the control of software policy and security.

Finally, hardware and network equipment should be accurately inventoried and scanned for compliance with the latest component and firmware updates. Network hardware inventory must include IP and MAC addresses, serial numbers, software version (many hacks exploit a flaw in a particular firmware revision), and the controlling entity. Determining who controls which element is important, as an organized administrative strategy will ensure that no network devices remain unmanaged.

Defining threats

Threat assessment includes many variables. Threats can originate within every entity in the organization as well as from the outside. Human error is common, especially regarding data integrity: data can accidentally be deleted, moved, or corrupted. Another threat is copyright infringement, which can be the result of piracy or a lack of licensing compliance. Deliberate acts of electronic trespassing, vandalism, or extortion are a threat, as is theft in its various forms, such as stealing data or hardware. Worms, viruses, and other malware are a constant threat to system integrity, and are often at the hands of end users who breach policy by downloading and installing software that has not been authorized by systems administration. Forces of nature are another threat: any kind of natural disaster is possible, and this requires another plan – disaster recovery planning. Many threats originate from the outside, such as the network providers that grant WAN connectivity to data and telecommunications networks; sometimes these core connections are interrupted, diminishing quality of service (QoS). Finally, systems hardware and software, as well as obsolete systems, can fail, resulting in corrupted and lost data as well as the infamous systems downtime: the bane of any CIO.

Control strategies

Risk within information security is a competitive disadvantage. Once this is realized, four major control strategies within risk control can be undertaken after the initial vulnerability assessment.

Avoidance

Prevention is the first step to security. This includes eradicating vulnerabilities within systems and implementing policy that restricts access to assets. Management mandates policy and ensures that “certain procedures are always followed” (Whitman & Mattord, 2005). Avoidance also includes educating employees on new and existing technologies, which enables safer and more experienced use of information systems. On the technology side, systems administration should apply countermeasures such as strong password policy to control access.

Transference

The saying that there is always someone better at it applies here. Transference is the process of outsourcing the risk to other organizations or processes. For example, outsourcing security management puts its administration into the hands of more experienced people, which can free a business of these duties so that it can focus on its core operations – the product.

Mitigation

Through preparation and diligent pre-planning, the impact “caused by the exploitation of vulnerability” (Whitman & Mattord, 2005) can be minimized. This includes the disaster recovery plan, the business continuity plan, and the incident response plan. These plans require substantial time and resources to construct and are a large part of any organization's enterprise architecture. This is the stage where the constant auditing and project management within information security planning happen.

Acceptance

This strategy is chosen when the cost of defending an asset is not justifiable: sometimes it is more economical to replace an asset than to bear the expense of protecting it.

Selecting a risk control strategy relies on the initial feasibility study, which includes a cost-benefit analysis that determines the worth of protecting the information assets. The costs to be reviewed are purchasing, service costs, maintenance, and training expenses. Information security is “required because the technology applied to information creates risk” (Blakley, McDermott & Geer, 2001). Risk management is the core of today's business continuity.

References:
Whitman, M., & Mattord, H. (2005). Risk Management. Principles of Information Security, p. 116. Thomson.

Blakley, B., McDermott, E., & Geer, D. (2001). Information Security is Information Risk Management. Proceedings of the 2001 Workshop on New Security Paradigms. ACM.

Common Information Model

Within a heterogeneous storage environment, it is difficult to administer all systems with most proprietary storage management solutions, as those systems can lack critical elements of the Common Information Model. The CIM, an open standard of storage management adopted by the SNIA and also known as Bluefin, is a common standard among storage vendors that enables heterogeneous operation within storage environments. It is an “alternative to proprietary SAN management of multivendor storage networks from a single console” (Clark, 2003) and eliminates dependence on proprietary vendor APIs. This gives the client more flexibility and efficiency within storage systems administration. Most common is web-based administration within a CIM-enabled management environment, which simplifies management from a single console.

Reference
Clark, T. (2003). Using the SNIA Shared Storage Model. Designing Storage Area Networks. Chapter 10. Addison Wesley.

Change Management within Storage

In an article published by Network World, “Change management reins in SANs,” Ron Alon explains that it is prudent to implement change management software within the SAN to acquire the SAN architecture, store the configuration of that architecture, and notify administrators when a change or failure occurs. Through automation, Alon asserts, “organizations can gain dramatic improvements in risk and cost reduction of their SAN environments” (Alon, 2004). Without change management software, the SAN administrator would have to manage change events with time-consuming manual techniques using legacy spreadsheets and other documentation software. The change management engine automates all monitoring processes: “First, the software establishes a baseline map of the entire SAN environment that is stored in a configuration data repository. The map captures device data and configuration information, including all physical devices, cables, and logical access paths and dependencies between components such as storage devices, servers and switches. The server performs all the mapping and continuously communicates with all SAN devices. It analyzes any configuration change and correlates events and device data to generate a uniform, accurate picture of a SAN and its access paths” (Alon, 2004). The engine also analyzes the SAN, tests for unauthorized paths and for vulnerabilities within zoning and clustering, and notifies administrators. The software is a “checks-and-balances” layer that permits simulation of changes, examines the probable effects of those changes, and monitors them.

Reference:
Alon, R. (Dec 20, 2004). Change management reins in SANs. Network World, p.29. Retrieved June 17, 2009, from Academic OneFile via Gale

Yet Another Hierarchical Officious Oracle

yahoo = YAHOO. This is what the name means, as coined by the Stanford grads who invented it. …just a thought.

Data storage as a service

This has been around for years, but has not been popular in the past because of apprehension about storing data outside of one's domain. Security questions abound (understandably) with this service, but security measures are constantly improving. A few years back, I had a small business on the side in which I offered remote backup to college students and small businesses. I got few customers, as most were apprehensive about putting their data in someone else's hands. In the last grad class we talked a little about this and came to the conclusion that the public is becoming more comfortable with online storage, as we now stash our photographs and other personal (but non-confidential) data online at sites such as Flickr. Businesses have been doing the same with their data for years. I see data storage as a service as an area of growth among big business, as data storage is exploding within the organization, and “quality of service levels… are needed for secure data access” (Barker & Massiglia, 2002). This has already created a new administrative position: the data administrator – usually another name for the SAN admin.

Reference:
Barker, R., Massiglia, P. (2002). Data storage as a service. Storage Area Network Essentials. John Wiley & Sons.