-
What's New in MariaDB Server 10.3
maxmether
Fri, 05/25/2018 - 18:52
We are happy to announce the general availability (GA) of MariaDB Server 10.3! This release is a big milestone for the development of MariaDB Server and the result of a huge effort by the development team and contributors – thanks to everyone involved! With our previous major release, MariaDB Server 10.2, we started a journey of adding more enterprise-grade features to close the gap with proprietary databases. With MariaDB Server 10.3 we take a huge leap on that journey, becoming the first enterprise open source database to add features like temporal data processing (through system versioning) and compatibility with Oracle sequences and Oracle PL/SQL. At the same time, we stay true to our open source and innovative roots by adding support for new storage engines, making it easier to adapt to different workloads and to the hardware available to users. This lets us adapt quickly to an ever-changing landscape where new innovations appear at an ever-accelerating pace. This is our greatest release yet, and with it we want to put our stake in the enterprise database category.
The key enhancements of MariaDB Server 10.3 can be put in the following categories:
Temporal data processing (system-versioned tables)
Oracle compatibility features
Purpose-built storage engines
Temporal Data Processing
Temporal data processing through our system versioning feature is one of the more exciting additions in the MariaDB Server 10.3 release. With system versioning, the database will keep track of all changes made to every row in the table. The old versions of the rows are not visible through normal query syntax, but by using a special syntax you can access all of the old versions of the row. This capability lends itself to a large number of use cases, anything from auditing and forensics (finding the exact point-in-time result set from a suspicious query executed some time ago) to things like analyzing changes in your data, comparing customer preferences year to year and a multitude of other possibilities. This feature can be turned on per table and the history can also be deleted periodically so that your table doesn’t grow indefinitely. The use cases are both exciting and endless! For more information on system versioning read our manual or this blog on automatic data versioning.
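A minimal sketch of what this looks like in SQL (the table and column names are illustrative, not from the release notes):

```sql
-- Turn versioning on when creating the table:
CREATE TABLE accounts (
    id INT PRIMARY KEY,
    balance DECIMAL(10,2)
) WITH SYSTEM VERSIONING;

UPDATE accounts SET balance = 50.00 WHERE id = 1;

-- A normal query sees only the current rows:
SELECT * FROM accounts;
-- The special syntax exposes history, e.g. the state at a past point in time:
SELECT * FROM accounts FOR SYSTEM_TIME AS OF TIMESTAMP '2018-05-01 10:00:00';
-- ...or every version of every row:
SELECT * FROM accounts FOR SYSTEM_TIME ALL;
```

Versioning can also be switched on for an existing table with ALTER TABLE ... ADD SYSTEM VERSIONING.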
Oracle Compatibility
As demand for MariaDB Server has grown in larger enterprises, we have also seen demand for features that are readily available in proprietary databases. To make MariaDB easier to adopt for DBAs and skilled database engineers coming from other products, we wanted to add as much compatibility as possible.
In MariaDB Server 10.3, we added a new stored routine syntax in addition to the existing MariaDB SQL/PSM syntax. We now support MariaDB SQL/PL, a syntax designed to be compatible with Oracle PL/SQL. This makes migrating existing applications much easier and lets existing skills be used without complex retraining. In the process we also added several new constructs to our stored procedure support, such as the new ROW data type.
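As a hedged sketch of the Oracle-compatible syntax (the procedure and table names are made up for illustration), switching the session to Oracle mode lets you write PL/SQL-style routines:

```sql
SET SESSION sql_mode = ORACLE;

DELIMITER //
CREATE OR REPLACE PROCEDURE add_bonus(p_id IN INT, p_amount IN NUMBER) AS
BEGIN
  -- Body uses Oracle-style AS ... BEGIN ... END instead of SQL/PSM syntax
  UPDATE employees SET salary = salary + p_amount WHERE id = p_id;
END;
//
DELIMITER ;
```

Note the Oracle-style parameter declarations (name before IN) and the NUMBER type, neither of which exists in the default SQL/PSM syntax.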
The new syntax isn't the only new compatibility feature: we also added sequences, a more flexible way of generating unique primary keys than the existing auto_increment feature, and fully compatible with Oracle sequences. Learn more about how to use sequences in this blog post. Together with features added previously (window functions, common table expressions (CTEs), etc.), we now have a deep set of enterprise-grade features that can tackle any type of application need.
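A minimal sketch (the sequence name is illustrative):

```sql
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1;

SELECT NEXTVAL(order_seq);        -- first call returns 1
SELECT NEXT VALUE FOR order_seq;  -- ANSI-style equivalent, returns 2
SELECT LASTVAL(order_seq);        -- last value handed out in this session
```

Unlike auto_increment, a sequence is a standalone object that several tables or applications can draw values from.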
Purpose-Built Storage Engines
At MariaDB, we believe in using the right tool for the job. However, we don't think you should have to change everything in order to achieve that. Our unique architecture with pluggable storage engines allows the user to adapt the database to the use case and workload without changing its main characteristics and features. We believe this flexibility serves the best interest of the user, and we will advance it further in future versions of MariaDB. This architecture enables both the community and our team to innovate by adding storage engines designed for new hardware and new use cases. In MariaDB Server 10.3, we introduce two new storage engines that are declared stable: MyRocks and Spider.
MyRocks comes from a collaboration with Facebook and is built on top of RocksDB – an open source project mainly maintained by Facebook. The MyRocks storage engine uses a log-structured merge-tree (LSM tree) architecture and is well suited to write-heavy workloads. MyRocks also achieves a very high compression ratio and is built to extend the lifespan of SSDs.
Spider is a storage engine designed for extreme scale. It allows you to shard a specific table across multiple nodes: it uses the partitioning protocol to define how the table should be split up, and each individual shard then resides on a remote MariaDB Server that handles queries only for that particular shard. With Spider you get almost linear scaling for INSERTs and key-lookup reads.
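A hedged sketch of how a sharded Spider table might be declared (the hosts, credentials and names are illustrative, and the backend servers must hold the real copies of the table):

```sql
-- Describe the backend MariaDB servers holding the data:
CREATE SERVER backend1 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.1', DATABASE 'app', USER 'spider', PASSWORD 'secret', PORT 3306);
CREATE SERVER backend2 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.2', DATABASE 'app', USER 'spider', PASSWORD 'secret', PORT 3306);

-- The Spider table itself holds no data; the partitioning definition
-- decides which backend each row lives on:
CREATE TABLE orders (
  id BIGINT PRIMARY KEY,
  amount DECIMAL(10,2)
) ENGINE=SPIDER COMMENT='wrapper "mysql", table "orders"'
PARTITION BY KEY (id) (
  PARTITION p1 COMMENT = 'srv "backend1"',
  PARTITION p2 COMMENT = 'srv "backend2"'
);
```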
And there’s more ...
In addition to this, we have added a multitude of features to help speed up schema operations (like instant ADD COLUMN) and other optimizations and compatibility features. The ADD COLUMN feature is another example of our collaboration with customers and partners including Alibaba, Tencent and ServiceNow, and is just the beginning of making heavy DDL operations more usable.
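For example, on 10.3 an ADD COLUMN that appends to the end of the row no longer rebuilds the table (table and column names are illustrative):

```sql
-- Completes instantly regardless of table size:
ALTER TABLE orders ADD COLUMN note VARCHAR(64);
-- On recent releases you can also demand the instant behaviour, so the
-- statement fails instead of silently falling back to a slow table copy:
ALTER TABLE orders ADD COLUMN note2 VARCHAR(64), ALGORITHM=INSTANT;
```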
Want all the details? Get a full list of features in MariaDB Server 10.3.
Get MariaDB Server 10.3 as part of the MariaDB TX 3.0 download – now available.
MariaDB Releases
-
The Perfect Server - Ubuntu 18.04 (Bionic Beaver) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3.1
This tutorial shows how to install an Ubuntu 18.04 LTS (Bionic Beaver) server with Apache, BIND, Dovecot and ISPConfig 3.1. ISPConfig is a web hosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).
-
Recommended fix for MySQL Checker Query Script for Tungsten Clustering
We have identified an issue in a script that is executed as part of the Tungsten Clustering solution. The script executes a small query against the internal tables used by the replicator in order to help identify the current status and latency of the replication process. We believe this script may occasionally fail under some very specific conditions, which would not normally be an issue, but the knock-on effect is to create out-of-memory errors and instability in how the current replication state is identified.
The issue has been fixed in the upcoming 6.0.1 and 5.3.2 releases, but we have identified that customers on versions of the Tungsten Clustering solution from v4.0 upwards would benefit from using an updated version of the script.
If you are using Tungsten Clustering 4.x or higher, it is recommended that you apply this patch to improve the stability of your clustering installation.
The script can be updated in a number of different ways, including using a script we have written that updates the files in place. Because of how the script is installed, we recommend that even if you update existing installations manually or with the update script, you also update the version in the package or staging directories so that updates and new installations do not wipe out the changes.
Note: Updating the live version of the file in an installed directory saves you having to reinstall or restart any of the clustering components. When the file has been updated, the manager will automatically reload and use the new version.
Updating installed directories using a script
The easiest way to apply the fix is to use our script, which simply finds all files named mysql_checker_query.sql and swaps out just one line.
Since the standard location for staging extraction is /opt/continuent/software, we expect to be able to patch the files found in the staging directories along with those found in the installed directory.
To apply the patch to both the installed files and staging templates by using our update script:
Download the script we have written, fix_mysql_checker_query.sh
Make the script executable:
chmod +x fix_mysql_checker_query.sh
Run the script. By default it searches for files in /opt/continuent, but you may also specify a directory. For example:
./fix_mysql_checker_query.sh
or
./fix_mysql_checker_query.sh /opt/tungsten
The script will update both the installed files and any staging templates located under the target directory.
Updating via staging directory (requires tpm update)
To update a version of the software extracted from the distribution tarball:
Download the mysql_checker_query.sql.tpl file.
Change to the extracted software directory
Copy the downloaded file into ./tungsten-manager/samples/conf/mysql_checker_query.sql.tpl
$ cp ~/mysql_checker_query.sql.tpl ./tungsten-manager/samples/conf/mysql_checker_query.sql.tpl
If you want to update your software now, bearing in mind that this will restart the managers and connectors, perform an update through tpm to update the distribution:
./tools/tpm update --replace-release
Updating an installed directory manually
To update the script by hand, you will need to update a version of the script for every service within your installation.
Download the mysql_checker_query.sql.tpl file.
Copy the file into a new file named mysql_checker_query.sql
Edit the file and change each instance of the string @{MGR_REPL_SCHEMA} to the name of the tracking schema. For example, if your service is called 'east' then the tracking schema is 'tungsten_east'
Copy the temporary file into the installed configuration directory. For example,
/opt/continuent/tungsten/tungsten-manager/conf/mysql_checker_query.sql
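The placeholder replacement above can also be done with sed instead of a manual edit. This sketch simulates the downloaded template with a one-line stand-in (the real file ships with Tungsten; the query content here is illustrative), using the 'tungsten_east' schema from the example:

```shell
# Stand-in for the downloaded template (the real .tpl file ships with Tungsten):
printf 'SELECT seqno FROM @{MGR_REPL_SCHEMA}.trep_commit_seqno;\n' > /tmp/mysql_checker_query.sql.tpl

# Replace every instance of the placeholder with the tracking schema,
# e.g. 'tungsten_east' for a service named 'east':
sed 's/@{MGR_REPL_SCHEMA}/tungsten_east/g' /tmp/mysql_checker_query.sql.tpl > /tmp/mysql_checker_query.sql

cat /tmp/mysql_checker_query.sql
```

The resulting file can then be copied into the installed configuration directory as described above.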
-
Ensure better defaults with InnoDB dedicated server
In many of our consulting projects we have seen customers running a dedicated MySQL server with the default configuration, without any tuning for the underlying hardware. Idle hardware, like idle money, gives you no returns.
If you come from a non-DBA background and have chosen InnoDB as your storage engine, the next question is: which major variables need to be tuned for the available hardware? Here is the answer.
In this post, we detail the innodb_dedicated_server variable introduced in MySQL 8.0.11. It answers the question above, and MySQL 8.0 in general ships with better defaults for production use.
If innodb_dedicated_server is enabled in my.cnf, MySQL tunes the following variables based on the amount of memory detected on the server:
innodb_buffer_pool_size
innodb_log_file_size
innodb_flush_method
Overview:
innodb_buffer_pool_size
Server Memory    Buffer Pool Size
< 1 GiB          128 MiB (the innodb_buffer_pool_size default)
<= 4 GiB         detected server memory * 0.5
> 4 GiB          detected server memory * 0.75
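The buffer pool rule in the table above can be sketched as a small shell helper (illustrative only, not MySQL's actual code; it takes the detected memory in whole GiB):

```shell
# Given detected server memory in GiB, print the buffer pool size
# that innodb_dedicated_server would pick, per the table above.
buffer_pool_for() {
  mem_gb=$1
  if [ "$mem_gb" -lt 1 ]; then
    echo "128 MiB (default)"
  elif [ "$mem_gb" -le 4 ]; then
    awk -v m="$mem_gb" 'BEGIN { printf "%.1f GiB\n", m * 0.5 }'
  else
    awk -v m="$mem_gb" 'BEGIN { printf "%.1f GiB\n", m * 0.75 }'
  fi
}

buffer_pool_for 4    # prints "2.0 GiB"
buffer_pool_for 16   # prints "12.0 GiB"
```

The 4 GiB case matches the test run shown later in this post (a 4 GB server getting a 2 GB buffer pool).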
innodb_buffer_pool_size can be set offline (at startup) to up to 80% of physical RAM. Since MySQL 5.7 it can also be resized online, without restarting the server.
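The online resize looks like this (the target size is illustrative):

```sql
-- Resize the buffer pool without a restart (MySQL 5.7 and later):
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;  -- 2 GiB
-- Follow the progress of the resize operation:
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';
```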
innodb_log_file_size
Server Memory    Log File Size
< 1 GiB          48 MiB (the innodb_log_file_size default)
<= 4 GiB         128 MiB
<= 8 GiB         512 MiB
<= 16 GiB        1024 MiB
> 16 GiB         2048 MiB
The redo log (innodb_log_file_size) is primarily used to recover the database server after a crash. A common sizing recommendation is that the redo log should be able to hold at least the last hour's worth of transactions, for better stability and durability. A log that is too small slows down write performance (i.e., commit latency), because transactions must wait for log space to be freed up.
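A hedged sketch of checking the one-hour rule: sample the redo bytes written twice, an hour apart, and compare the delta to the configured log capacity:

```sql
-- Total redo bytes written since startup:
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- Run this again roughly one hour later, during peak write load.
-- The total redo capacity, innodb_log_file_size * innodb_log_files_in_group,
-- should comfortably exceed the difference between the two samples.
```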
innodb_flush_method
The flush method is set to O_DIRECT_NO_FSYNC when innodb_dedicated_server is enabled. If the O_DIRECT_NO_FSYNC setting is not available on the server, the platform-specific default innodb_flush_method is used instead.
Testing innodb_dedicated_server:
Configuration:
RAM – 4G
CPU – 2
DISK – 10G SSD
Variable disabled (default):
Memory:
root@localhost :(none) > \! free -m
total used free shared buff/cache available
Mem: 3849 630 1706 8 512 1009
Swap: 0 0 0
my.cnf values:
root@localhost :(none) > \! cat /etc/my.cnf | grep -i innodb
innodb_buffer_pool_size = 1G
innodb_log_file_size = 96M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
Global values of the variables with innodb_dedicated_server disabled:
root@localhost :(none) > select @@innodb_dedicated_server `Innodb Dedicated Server`, concat(round((@@innodb_buffer_pool_size / 1024 / 1024 / 1024 ),2),' GB') `Innodb Buffer Pool Size`,concat(round((@@innodb_log_file_size / 1024 / 1024),2),' MB') `Innodb Log file size`,@@innodb_flush_method `Innodb Flush Method`\G
*************************** 1. row ***************************
Innodb Dedicated Server: 0
Innodb Buffer Pool Size: 1.00 GB
Innodb Log file size: 96.00 MB
Innodb Flush Method: O_DIRECT
1 row in set (0.00 sec)
There is no change in the variables because innodb_dedicated_server is disabled; the other values match what is set in my.cnf.
Variable enabled:
my.cnf values:
root@localhost :(none) > \! cat /etc/my.cnf | grep -i innodb
innodb_dedicated_server = 1
#innodb_buffer_pool_size = 1G
#innodb_log_file_size = 96M
innodb_flush_log_at_trx_commit = 2
#innodb_flush_method = O_DIRECT
innodb_dedicated_server will not affect the above variables if they are set explicitly in my.cnf, so I commented them out and restarted the MySQL server.
Global values of the variables with innodb_dedicated_server enabled:
root@localhost :(none) > select @@innodb_dedicated_server `Innodb Dedicated Server`, concat(round((@@innodb_buffer_pool_size / 1024 / 1024 / 1024 ),2),' GB') `Innodb Buffer Pool Size`,concat(round((@@innodb_log_file_size / 1024 / 1024),2),' MB') `Innodb Log file size`,@@innodb_flush_method `Innodb Flush Method`\G
*************************** 1. row ***************************
Innodb Dedicated Server: 1
Innodb Buffer Pool Size: 2.00 GB
Innodb Log file size: 128.00 MB
Innodb Flush Method: O_DIRECT_NO_FSYNC
1 row in set (0.00 sec)
So if innodb_dedicated_server is enabled, those variables are set automatically according to the formulas above.
If one of the dependent variables is specified in my.cnf alongside innodb_dedicated_server, MySQL writes a warning to the error log.
Error log:
[Warning] [MY-012358] [InnoDB] InnoDB: Option innodb_dedicated_server
is ignored for innodb_buffer_pool_size because
innodb_buffer_pool_size=419430400 is specified explicitly.
Key Takeaways:
Do not enable this variable if the server is not dedicated to MySQL.
Do not set the dependent variables explicitly if innodb_dedicated_server is enabled.
This variable helps you make better use of your hardware, but there is more that can be tuned based on database usage patterns.
-
Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together
Please join Percona’s Principal Architect Alex Rubin as he presents MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together to give you a highly available cluster database environment on Tuesday, May 29th at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).
Register Now
In this webinar, Alex will discuss how to deploy a highly available MySQL database environment on Kubernetes/Openshift using Percona XtraDB Cluster (PXC) together with MySQL Proxy to implement read/write splitting.
If you have never used Kubernetes and Openshift, or never used PXC / MySQL Proxy, Alex will do a quick introduction to these technologies. There will also be a demo where Alex sets up a PXC cluster with ProxySQL in Openshift Origin and tries to break it.
By the end of this webinar you will have a better understanding of:
How to deploy Percona XtraDB Cluster with ProxySQL for HA solutions
How to leverage Kubernetes/Openshift in your environments
How to troubleshoot performance issues
Register for the webinar
Alexander Rubin, Principal Consultant
Alexander joined Percona in 2013. He has worked with MySQL since 2000 as a DBA and application developer. Before joining Percona he did MySQL consulting as a principal consultant for over 7 years (starting with MySQL AB in 2006, then Sun Microsystems, then Oracle). He has helped many customers design large, scalable and highly available MySQL systems and optimize MySQL performance. Alexander has also helped customers design big data stores with Apache Hadoop and related technologies.
The post Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together appeared first on Percona Database Performance Blog.