MySQL Architecture - Part 1 - Basics



For any DBA, it is important to understand the basic architecture of the database; it is the foundation on which a DBA builds their skills. In today's blog, we will discuss the architecture of the MySQL database.

Overview of MySQL Architecture

MySQL is based on a tiered architecture, consisting of both subsystems and support components that interact with each other to read, parse, and execute queries, and to cache and return query results.

MySQL architecture consists of five primary subsystems that work together to respond to a request made to the MySQL database server.



1)     Query Engine

 

SQL Interface

The SQL interface provides the mechanisms to receive commands and transmit results to the user. The MySQL SQL interface was built to the ANSI SQL standard and accepts the same basic SQL statements as most ANSI-compliant database servers. Although many of the SQL commands supported in MySQL have options that are not ANSI standard, the MySQL developers have stayed very close to the ANSI SQL standard.

Connections to the database server are received from the network communication pathways, and a thread is created for each. The threaded process is the heart of the executable pathway in the MySQL server. MySQL is built as a true multithreaded application whereby each thread executes independently of the other threads (except for certain helper threads). The incoming SQL command is stored in a class structure, and the results are transmitted to the client by writing them out to the network communication protocols. Once a thread has been created, the MySQL server attempts to parse the SQL command and store the parts in the internal data structure.
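The thread-per-connection model described above can be sketched in a few lines of Python. This is a minimal simulation, not MySQL code: the connection IDs, the `handle_connection` helper, and the simulated statements are all hypothetical.

```python
import threading

def handle_connection(conn_id, sql, results):
    # Each client connection gets its own thread; the thread handles its
    # statement independently of the other connection threads.
    results[conn_id] = f"executed: {sql.strip()}"

# Simulated incoming connections, each carrying one SQL statement.
incoming = {1: "SELECT 1", 2: "SELECT 2"}
results = {}
threads = [threading.Thread(target=handle_connection, args=(cid, sql, results))
           for cid, sql in incoming.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[1])  # executed: SELECT 1
```

Each thread works on its own connection's statement and writes its own result, mirroring how MySQL threads execute independently except for shared helper structures.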


Parser

When a client issues a query, a new thread is created and the SQL statement is forwarded to the parser for syntactic validation (or rejection due to errors). The MySQL parser is implemented using a large Lex-YACC script that is compiled with Bison. The parser constructs a query structure used to represent the query statement (SQL) in memory as a tree structure (also called an abstract syntax tree) that can be used to execute the query.
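To make the parser's job concrete, here is a toy sketch (not the Bison grammar) that validates one tiny statement shape and builds a small syntax tree; the `parse_select` function and the tree layout are invented for illustration.

```python
import re

def parse_select(sql):
    # Minimal sketch: syntactically validate a single-table SELECT and build
    # a small abstract syntax tree; anything else is rejected, just as the
    # real parser rejects statements that fail validation.
    m = re.fullmatch(r"SELECT\s+(.+?)\s+FROM\s+(\w+)", sql.strip(), re.IGNORECASE)
    if not m:
        raise SyntaxError("could not parse statement")
    return {"node": "select",
            "columns": [c.strip() for c in m.group(1).split(",")],
            "table": {"node": "table", "name": m.group(2)}}

tree = parse_select("SELECT id, name FROM users")
print(tree["columns"])  # ['id', 'name']
```

The real parser covers the full SQL grammar, but the output is the same idea: an in-memory tree the optimizer and executor can walk.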


Query Optimizer

The MySQL query optimizer subsystem is considered by some to be misnamed. The optimizer used is a SELECT-PROJECT-JOIN strategy that attempts to restructure the query by first doing any restrictions (SELECT) to narrow the number of tuples to work with, then performs the projections to reduce the number of attributes (fields) in the resulting tuples, and finally evaluates any join conditions. While not considered a member of the extremely complicated query optimizer category, the SELECT-PROJECT-JOIN strategy falls into the category of heuristic optimizers. In this case, the heuristics (rules) are simply:

• Horizontally eliminate extra data by evaluating the expressions in the WHERE (HAVING) clause.

• Vertically eliminate extra data by limiting the data to the attributes specified in the attribute list. The exception is the storage of the attributes used in the join clause that may not be kept in the final query.

• Evaluate join expressions.

This results in a strategy that ensures a known-good access method to retrieve data in an efficient manner. Despite critical reviews, the SELECT-PROJECT-JOIN strategy has proven effective at executing the typical queries found in transaction processing.
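The three heuristics above can be sketched over plain Python lists of dicts; the `restrict`, `project`, and `join` helpers and the sample tables are invented for illustration, not MySQL internals.

```python
def restrict(rows, pred):
    # SELECT (restriction): horizontally eliminate tuples that fail the
    # WHERE predicate, narrowing the working set first.
    return [r for r in rows if pred(r)]

def project(rows, attrs):
    # PROJECT: vertically eliminate attributes not in the column list
    # (keeping the join key even if it is not in the final output).
    return [{a: r[a] for a in attrs} for r in rows]

def join(left, right, key):
    # JOIN last, on the already-reduced intermediate results.
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

orders = [{"id": 1, "cust": 10, "amount": 250}, {"id": 2, "cust": 11, "amount": 40}]
custs  = [{"cust": 10, "name": "Ann"}, {"cust": 11, "name": "Bob"}]

step1 = restrict(orders, lambda r: r["amount"] > 100)   # WHERE amount > 100
step2 = project(step1, ["cust", "amount"])              # keep join key + output cols
print(join(step2, custs, "cust"))
```

Restricting and projecting first means the (expensive) join runs over one tuple here instead of two, which is the whole point of the heuristic ordering.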

The first step in the optimizer is to check for the existence of tables and access control by the user. If there are errors, the appropriate error message is returned and control returns to the thread manager, or listener. Once the correct tables have been identified, they are opened and the appropriate locks are applied for concurrency control.

Once all of the maintenance and setup tasks are complete, the optimizer uses the internal query structure and evaluates the WHERE conditions (a restrict operation) of the query. Results are returned as temporary tables to prepare for the next step. If UNION operators are present, the optimizer executes the SELECT portions of all statements in a loop before continuing.

The next step in the optimizer is to execute the projections. These are executed in a similar manner as the restrict portions, again storing the intermediate results as temporary tables and saving only those attributes specified in the column specification in the SELECT statement. Lastly, the structure is analyzed for any JOIN conditions that are built using the join class, and then the join::optimize() method is called. At this stage the query is optimized by evaluating the expressions and eliminating any conditions that result in dead branches or always-true or always-false conditions (as well as many other similar optimizations). The optimizer is attempting to eliminate any known-bad conditions in the query before executing the join. This is done because joins are the most expensive and time-consuming of all of the relational operators. It is also important to note that the join optimization step is performed for all queries that have a WHERE or HAVING clause regardless of whether there are any join conditions. This enables developers to concentrate all of the expression evaluation code in one place. Once the join optimization is complete, the optimizer uses a series of conditional statements to route the query to the appropriate library method for execution.
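The dead-branch elimination mentioned above can be illustrated with a tiny condition-tree simplifier; the tuple-based tree encoding and the `simplify` function are hypothetical stand-ins for what join::optimize() does over its own structures.

```python
def simplify(cond):
    # Prune always-true / always-false branches from an AND/OR condition
    # tree before the join is executed. A condition is either a Python
    # bool (a constant), a string (an opaque runtime predicate), or a
    # tuple ("AND"|"OR", left, right).
    if not isinstance(cond, tuple):
        return cond
    op, left, right = cond
    left, right = simplify(left), simplify(right)
    if op == "AND":
        if left is False or right is False:
            return False          # dead branch: conjunct can never match
        if left is True:
            return right
        if right is True:
            return left
    elif op == "OR":
        if left is True or right is True:
            return True
        if left is False:
            return right
        if right is False:
            return left
    return (op, left, right)

# WHERE (1 = 1 AND a > 5) reduces to just a > 5
print(simplify(("AND", True, "a > 5")))  # a > 5
```

Constants fold away entirely, so the executor only ever evaluates predicates that can actually affect the result.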


Query Execution

Execution of the query is handled by a set of library methods designed to implement a particular query. For example, the mysql_insert() method is designed to insert data. Likewise, there is a mysql_select() method designed to find and return data matching the WHERE clause. This library of execution methods is located in a variety of source code files under a file of a similar name (e.g., sql_insert.cc or sql_select.cc). All of these methods have as a parameter a thread object that permits the method to access the internal query structure and eases execution. Results from each of the execution methods are returned using the network communication pathways library. The query execution library methods are clearly implemented using the interpretative model of query execution.
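The routing of a parsed statement to the matching execution method can be sketched with a dispatch table; the handler names, the `thd` dict standing in for the thread object, and the sample table are all illustrative, not the actual sql_select.cc/sql_insert.cc code.

```python
def select_handler(thd):
    # Hypothetical stand-in for mysql_select(): return rows matching the
    # WHERE predicate carried in the thread's query structure.
    return [r for r in thd["table"] if thd["where"](r)]

def insert_handler(thd):
    # Hypothetical stand-in for mysql_insert(): append the new row.
    thd["table"].append(thd["row"])
    return 1

# Conditional routing to the appropriate execution method, keyed on the
# statement type recorded in the internal query structure.
DISPATCH = {"SELECT": select_handler, "INSERT": insert_handler}

table = [{"id": 1}, {"id": 2}]
thd = {"table": table, "where": lambda r: r["id"] > 1}
print(DISPATCH["SELECT"](thd))  # [{'id': 2}]
```

Every handler takes the same thread-object parameter, which is what lets the dispatch stay a simple lookup.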


Query Cache

While not its own subsystem, the query cache should be considered a vital part of the query optimization and execution subsystem. The query cache is a marvelous invention that caches not only the query structure but also the query results themselves. This enables the system to check for frequently used queries and shortcut the entire query optimization and execution stages altogether. This is another of the technologies that is unique to MySQL. Other database systems cache queries, but no others cache the actual results. As you can appreciate, the query cache must also allow for situations where the results are “dirty” in the sense that something has changed since the last time the query was run (e.g., an INSERT, UPDATE, or DELETE was run against the base table), and the cached queries may need to be occasionally purged.
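The cache-and-invalidate behavior described above can be sketched as a small class; the `QueryCache` name and its methods are invented for illustration and are not MySQL's implementation.

```python
class QueryCache:
    # Sketch: cache full result sets keyed by the exact query text, and
    # purge every cached result that reads a table whenever that table
    # is modified ("dirty" results must not be served).
    def __init__(self):
        self.results = {}   # query text -> (tables read, cached rows)

    def get(self, sql):
        entry = self.results.get(sql)
        return entry[1] if entry else None

    def put(self, sql, tables, rows):
        self.results[sql] = (tables, rows)

    def invalidate(self, table):
        self.results = {q: e for q, e in self.results.items()
                        if table not in e[0]}

qc = QueryCache()
qc.put("SELECT * FROM t1", {"t1"}, [(1,), (2,)])
print(qc.get("SELECT * FROM t1"))   # cache hit: [(1,), (2,)]
qc.invalidate("t1")                 # an INSERT/UPDATE/DELETE ran against t1
print(qc.get("SELECT * FROM t1"))   # None: stale result, must re-execute
```

A hit skips parsing, optimization, and execution entirely; a write to the base table empties every dependent entry.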


2)   Buffer Manager/Cache and Buffers

The caching and buffers subsystem is responsible for ensuring that the most frequently used data (or structures, as you will see) are available in the most efficient manner possible. In other words, the data must be resident or ready to read at all times. The caches dramatically increase the response time for requests for that data because the data is in memory and thus no additional disk access is necessary to retrieve it. The cache subsystem was created to encapsulate all of the caching and buffering into a loosely coupled set of library functions. Although you will find the caches implemented in several different source code files, they are considered part of the same subsystem.

A number of caches are implemented in this subsystem. Most of the cache mechanisms use the same or similar concept of storing data as structures in a linked list. The caches are implemented in different portions of the code to tailor the implementation to the type of data that is being cached. Let’s look at each of the caches.

 

Table Cache

The table cache was created to minimize the overhead in opening, reading, and closing tables (the .FRM files on disk). For this reason, the table cache is designed to store metadata about the tables in memory. This makes it much faster for a thread to read the schema of the table without having to reopen the file every time. Each thread has its own list of table cache structures. This permits the threads to maintain their own views of the tables so that if one thread is altering the schema of a table (but has not committed the changes), another thread may use that table with the original schema. The structure used is a simple one that includes all of the metadata information for a table. The structures are stored in a linked list in memory and associated with each thread.
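The per-thread metadata caching can be sketched with Python's thread-local storage; `table_schema_from_disk` is a hypothetical stand-in for reading the .FRM metadata, and the schema dict is invented.

```python
import threading

# Sketch: each thread keeps its own collection of table-metadata
# structures, so a schema read never reopens the on-disk file and one
# thread's in-progress schema change stays invisible to another
# thread's cached view.
thread_local = threading.local()

def table_schema_from_disk(name):
    # Hypothetical stand-in for opening and reading the .FRM metadata.
    return {"name": name, "columns": ["id", "payload"]}

def get_table(name):
    cache = getattr(thread_local, "tables", None)
    if cache is None:
        cache = thread_local.tables = {}
    if name not in cache:                # miss: open and read once
        cache[name] = table_schema_from_disk(name)
    return cache[name]                   # hit: served from this thread's cache

print(get_table("users")["columns"])  # ['id', 'payload']
```

Because each thread owns its cache, no locking is needed to read a schema, at the cost of some duplicated metadata across threads.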

 

Record Cache

The record cache was created to enhance sequential reads from the storage engines. Thus the record cache is usually only used during table scans. It works like a read-ahead buffer by retrieving a block of data at a time, thus resulting in fewer disk accesses during the scan. Fewer disk accesses generally equates to improved performance. Interestingly, the record cache is also used in writing data sequentially by writing the new (or altered) data to the cache first and then writing the cache to disk when full. In this way write performance is improved as well. This sequential behavior (called locality of reference) is the main reason the record cache is most often used with the MyISAM storage engine, although it is not limited to MyISAM. The record cache is implemented in an agnostic manner that doesn’t interfere with the code used to access the storage engine API. Developers don’t have to do anything to take advantage of the record cache as it is implemented within the layers of the API.
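The read-ahead idea is easy to demonstrate; the `RecordCache` class below is a hypothetical sketch that counts "disk" accesses during a sequential scan.

```python
class RecordCache:
    # Sketch of a read-ahead buffer: fetch a whole block of records per
    # disk access so a sequential table scan touches the disk far fewer
    # times than one access per record.
    def __init__(self, records, block_size=4):
        self.records, self.block_size = records, block_size
        self.buffer, self.buffer_start = [], 0
        self.disk_reads = 0

    def read(self, i):
        if not (self.buffer_start <= i < self.buffer_start + len(self.buffer)):
            self.disk_reads += 1                       # one access per block
            self.buffer_start = (i // self.block_size) * self.block_size
            self.buffer = self.records[self.buffer_start:
                                       self.buffer_start + self.block_size]
        return self.buffer[i - self.buffer_start]

cache = RecordCache(list(range(12)), block_size=4)
scan = [cache.read(i) for i in range(12)]   # full sequential scan
print(cache.disk_reads)  # 3 block reads instead of 12 record reads
```

The same buffering works in reverse for sequential writes: accumulate records in the buffer, then flush the whole block when it fills.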

 

Key Cache

The key cache is a buffer for frequently used index data. In this case, it is a block of data for the index file (B-tree) and is used exclusively for MyISAM tables (the .MYI files on disk). The indexes themselves are stored as linked lists within the key cache structure. A key cache is created when a MyISAM table is opened for the first time. The key cache is accessed on every index read. If an index is found in the cache, it is read from there; otherwise, a new index block must be read from disk and placed into the cache. However, the cache has a limited size, tunable by changing the key_buffer_size configuration variable (the size of the individual blocks is controlled by key_cache_block_size). Thus not all blocks of the index file will fit into memory. So how does the system keep track of which blocks have been used?

The cache implements a monitoring system to keep track of how frequently the index blocks are used. The key cache has been implemented to keep track of how “warm” the index blocks are. Warm in this case refers to how many times the index block has been accessed over time. Values for warm include BLOCK_COLD, BLOCK_WARM, and BLOCK_HOT. As the blocks cool off and new blocks become warm, the cold blocks are purged and the warm blocks added. This strategy is a least recently used (LRU) page-replacement strategy—the same algorithm used for virtual memory management and disk buffering in operating systems—that has been proven to be remarkably efficient even in the face of much more sophisticated page-replacement algorithms. In a similar way, the key cache keeps track of the index blocks that have changed (called getting “dirty”). When a dirty block is purged, its data is written back to the index file on disk before being replaced. Conversely, when a clean block is purged it is simply removed from memory.
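LRU replacement with dirty-block write-back can be sketched with an OrderedDict; the `KeyCache` class and the dict standing in for the index file on disk are hypothetical.

```python
from collections import OrderedDict

class KeyCache:
    # Sketch of an LRU block cache with write-back: the coldest (least
    # recently used) block is purged first, and a purged dirty block is
    # flushed to the index file before it is dropped; a clean block is
    # simply removed.
    def __init__(self, capacity, disk):
        self.capacity, self.disk = capacity, disk
        self.blocks = OrderedDict()       # block id -> (data, dirty flag)

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # block just got "warmer"
            return self.blocks[block_id][0]
        data = self.disk[block_id]
        self._insert(block_id, data, dirty=False)
        return data

    def write(self, block_id, data):
        self._insert(block_id, data, dirty=True)

    def _insert(self, block_id, data, dirty):
        self.blocks[block_id] = (data, dirty)
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            cold_id, (cold_data, cold_dirty) = self.blocks.popitem(last=False)
            if cold_dirty:                      # flush before replacement
                self.disk[cold_id] = cold_data

disk = {1: "a", 2: "b", 3: "c"}
kc = KeyCache(capacity=2, disk=disk)
kc.read(1); kc.write(2, "B"); kc.read(3); kc.read(1)  # block 2 goes cold
print(disk[2])  # "B": dirty block 2 was written back when purged
```

Block 2 was modified in memory, cooled off as blocks 3 and 1 were touched, and was flushed to disk at purge time, exactly the dirty-block behavior described above.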

 

Privilege Cache

The privilege cache is used to store grant data on a user account. This data is stored in the same manner as an access control list (ACL), which lists all of the privileges a user has for an object in the system. The privilege cache is implemented as a structure stored in a first in, last out (FILO) hash table. Data for the cache is gathered when the grant tables are read during user authentication and initialization. It is important to store this data in memory as it saves a lot of time reading the grant tables.

 

Hostname Cache

The hostname cache is another of the helper caches, like the privilege cache. It too is implemented as a stack of a structure. It contains the hostnames of all the connections to the server. It may seem surprising, but this data is frequently requested and therefore in high demand and a candidate for a dedicated cache.

 

Miscellaneous

A number of other small cache mechanisms are implemented throughout the MySQL source code. One example is the join buffer cache used during complex join operations. For example, some join operations require comparing one tuple to all the tuples in the second table. A cache in this case can store the tuples read so that the join can be implemented without having to reread the second table into memory multiple times.
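The join buffer idea can be sketched as a block nested-loop join; the function name, the callable standing in for re-reading the inner table, and the sample tables are invented for illustration.

```python
def block_nested_loop_join(outer, inner_reader, key, buffer_size=2):
    # Sketch of a join buffer: collect a block of outer tuples, then scan
    # the inner table once per block instead of once per outer tuple.
    results, inner_scans = [], 0
    for start in range(0, len(outer), buffer_size):
        block = outer[start:start + buffer_size]   # fill the join buffer
        inner_scans += 1
        for r in inner_reader():                   # one scan serves the block
            for l in block:
                if l[key] == r[key]:
                    results.append({**l, **r})
    return results, inner_scans

t1 = [{"k": 1}, {"k": 2}, {"k": 3}, {"k": 4}]
t2 = [{"k": 2, "v": "x"}, {"k": 4, "v": "y"}]
rows, scans = block_nested_loop_join(t1, lambda: t2, "k")
print(scans)  # 2 inner scans instead of 4
```

Doubling the buffer size halves the number of inner-table scans again, which is why the buffer pays off on large joins.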


3)    The Storage Manager

The storage manager interfaces with the operating system to write data to the disk efficiently. Because the storage functions reside in a separate subsystem, the MySQL engine operates at a level of abstraction away from the operating system. The storage manager writes to disk all of the data in the user tables, indexes, and logs as well as the internal system data.

 

4)    The Transaction Manager

The function of the transaction manager is to facilitate concurrency in data access. This subsystem provides a locking facility to ensure that multiple simultaneous users access the data in a consistent way, without corrupting or damaging the data. Transaction control takes place via the lock manager subcomponent, which places and releases locks on the various objects being used in a transaction.
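The lock manager's place-and-release behavior can be sketched with the classic shared/exclusive rule; the `LockManager` class, transaction names, and object names are all hypothetical.

```python
class LockManager:
    # Sketch: shared ("S", read) locks on an object may coexist; an
    # exclusive ("X", write) lock conflicts with every other lock held
    # by a different transaction on that object.
    def __init__(self):
        self.locks = {}   # object -> list of (txn, mode)

    def acquire(self, txn, obj, mode):
        held = self.locks.setdefault(obj, [])
        for other_txn, other_mode in held:
            if other_txn != txn and "X" in (mode, other_mode):
                return False        # conflict: caller must wait or retry
        held.append((txn, mode))
        return True

    def release(self, txn, obj):
        self.locks[obj] = [h for h in self.locks.get(obj, []) if h[0] != txn]

lm = LockManager()
print(lm.acquire("T1", "row42", "S"))  # True: shared lock granted
print(lm.acquire("T2", "row42", "S"))  # True: readers coexist
print(lm.acquire("T3", "row42", "X"))  # False: writer blocked by readers
lm.release("T1", "row42"); lm.release("T2", "row42")
print(lm.acquire("T3", "row42", "X"))  # True: object is now free
```

Readers never block each other, while a writer waits for the object to be free, which is the consistency guarantee described above.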


5)    Recovery Manager.

The recovery manager’s job is to keep copies of data for retrieval later, in case of loss of data. It also logs commands that modify the data and other significant events inside the database. So far, only the InnoDB and BDB table handlers provide this recovery capability.


Subsystem Interaction and Control Flow

The query engine requests the data to be read from or written to the buffer manager to satisfy a user's query. It depends on the transaction manager for locking of the data so that concurrency is ensured. To perform table creation and drop operations, the query engine accesses the storage manager directly, bypassing the buffer manager, to create or delete the files in the file system.

The buffer manager caches data from the storage manager for efficient retrieval by the query engine. It depends on the transaction manager to check the locking status of data before it performs any modification action.

The transaction manager depends on the query cache and storage manager to place locks on data in memory and in the file system.

The recovery manager uses the storage manager to store command/event logs and backups of the data in the file system. It depends on the transaction manager to obtain locks on the log files being written. The recovery manager also needs to use the buffer manager during recovery from crashes.

The storage manager depends on the operating system file system for persistent storage and retrieval of data. It depends on the transaction manager to obtain locking status information.



Please do check our next Blog on MySQL Architecture - Part 2 - Locking and Concurrency
