1. Release V8.3.4 (January 20, 2019) |
1.1. | Further improvements to Kubernetes support |
| Support for running VoltDB in Kubernetes, and in particular resilience during cluster startup and XDCR
durability, has been improved in several ways. These changes improve the reliability of XDCR in Kubernetes
environments, especially environments using glusterfs. However, the changes are equally applicable to other
environments where node stability is questionable during startup. Specific improvements include:
- Errors and stack traces appearing during startup have been eliminated.
- Files written to the database root directory during startup are now protected against corruption due to nodes
failing during the startup process.
|
1.2. | Additional improvements |
| The following limitations in previous versions have been resolved:
- Enabling SSL encryption on VoltDB interfaces now takes advantage of OpenSSL, if it is installed on the server, to
significantly improve performance. For the improvement to take effect, both the VoltDB server and Java client
library must be upgraded to the latest version. The new client library also has a dependency on the Netty libraries
io.netty:netty-all:4.1.32.Final and io.netty:netty-tcnative-boringssl-static:2.0.20.Final. If OpenSSL is not
installed, or the latest (8.3.4 or later) VoltDB client library is not used, the server falls back to the built-in
Java SSL implementation.
- There was an issue where complex queries with many LEFT JOIN subclauses would consume large quantities of heap
space during the planning phase, in the worst case running out of memory. This issue has been resolved. Note,
however, that such queries may still take a long time to execute once planned and may exceed the query timeout
limit.
- There was an issue in database replication (DR) where dropping a partitioned table could cause the consumer
cluster to hang. This could only happen if the cluster was not paused and DR drained before the schema change,
and there was a simultaneous transaction being processed involving the table and other non-partitioned tables.
This issue has now been resolved.
|
2. Release V8.3.3 (November 21, 2018) |
2.1. | Improved Kubernetes support |
| The scripts that support running VoltDB in a Kubernetes environment have been rewritten to use ConfigMaps for
storing configuration, schema, and stored procedure classes, separating this information from the Docker image. It
is now possible to reuse a single Docker image in multiple configurations. See the readme in the
/tools/kubernetes directory where VoltDB is installed for details. |
2.2. | Additional improvement |
| The following limitation in previous versions has been resolved:
- VoltDB introduced two major changes in recent releases: a significant improvement in replicated table storage
(V8.1) and leadership rebalancing (V8.3). One outcome of these changes is the discovery of edge cases where
multi-partition transactions stall in what is referred to as a "deadlock". These deadlocks are rare, often seen
only in lab tests. Rare as they are, any such processing failure is critical, and this release fixes two known
instances of multi-partition deadlocks.
|
3. Release V8.3.2 (November 8, 2018) |
3.1. | New command line option for advertising an alternate DR interface |
| A new command line option for the voltdb start command, --drpublic, lets
you specify an alternate interface and (optionally) port, which the server then reports to database replication (DR)
consumer clusters. This feature is helpful for cloud environments where the internal interfaces are not accessible
from outside the hosted region, so the other DR clusters must use redirected interfaces and ports. Specify the
public interface as an IP address or host name followed by an optional colon and port number. If you do not specify
a port number, the publicly advertised port number is the same as the value for the internal
--replication port. For example: |
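In the following sketch (the host name and port are hypothetical placeholders, not values from this release), the
cluster advertises dr.mycompany.org and port 5555 to DR consumers:

   $ voltdb start --drpublic=dr.mycompany.org:5555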
3.2. | Additional improvements |
| The following limitations in previous versions have been resolved:
- There was an issue introduced in 8.1 where, if export was in use and nodes failed and then rejoined the cluster,
export could lose track of a few records, causing the export connector to never completely drain its queue.
Symptoms of this bug were that the export statistics would never reach zero and attempts to use the voltadmin
commands pause --wait or shutdown --save would hang. This issue has been resolved.
- There was an issue where attempting to apply a schema change involving the "time to live" (TTL) feature through
the sqlcmd utility could result in the command hanging. The root cause was an error applying the schema change;
however, the error was not reported to the user. The sqlcmd utility now recognizes such situations, reports the
error to the user, and returns to the command prompt. At the same time, the underlying issue with the TTL schema
change has been corrected.
|
4. Release V8.3.1 (October 5, 2018) |
4.1. | Additional improvements |
| The following limitations in previous versions have been resolved:
- There was an issue introduced in 8.3 that interfered with database replication (DR) of multi-partition
transactions. If the last query executed in a read/write transaction was a SELECT statement rather than a data
manipulation statement (such as INSERT, UPDATE, or DELETE), the transaction might not be correctly replicated to
the consumer cluster. This issue has been resolved.
- The syntax of the ALTER TABLE statement for adding a USING TTL clause to an existing table was incorrect in 8.3:
the "ADD" keyword was missing. The correct syntax is ALTER TABLE table-name ADD USING TTL.... This issue has been
resolved.
- Due to a change in the server logic for the VoltDB Management Center (VMC) introduced in V7.9, the database
replication (DR) and Import tabs no longer showed up when those features were turned on in the database. This
issue has been resolved.
|
5. Release V8.3 (September 21, 2018) |
5.1. | Leadership Rebalancing |
| In a K-safe cluster, an individual node is assigned as the leader for each unique partition and coordinates the
execution of transactions for that partition on all copies within the cluster. If a node fails, leadership can be
reassigned to one of the remaining nodes. If multiple nodes fail, leadership for all partitions could end up
congregating on only a few nodes. Previously, leadership was not redistributed when nodes rejoined the cluster.
Now the cluster rebalances partition leadership when it returns to its full complement of nodes, that is, as soon
as all of the failed nodes complete the rejoin process. |
5.2. | New STARTS WITH clause optimizes text comparisons |
| There is a new clause available for SQL statements such as SELECT. The STARTS WITH clause performs text
comparisons equivalent to LIKE with a pattern ending in a percent sign (%). That is, it matches string
values starting with the specified argument. STARTS WITH is beneficial in compiled statements (such as stored
procedures) because the clause STARTS WITH ? can use indexes on the column being evaluated,
whereas an equivalent LIKE ? clause cannot. See the description of the STARTS WITH clause in the
SELECT statement in the Using VoltDB manual for details. |
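As a brief sketch (the table, column, and procedure names here are hypothetical), a compiled statement such as the
following single-statement procedure can use an index on the searched column:

   CREATE PROCEDURE FindContactsByPrefix AS
       SELECT contact_id, last_name FROM contacts WHERE last_name STARTS WITH ?;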
5.3. | FORMAT_TIMESTAMP() function |
| TIMESTAMP values are stored and expressed in Greenwich Mean Time (GMT). The new SQL function
FORMAT_TIMESTAMP() lets you convert such values to a formatted text string in time zones other than GMT. See the
description of FORMAT_TIMESTAMP() in the
Using VoltDB manual for
details. |
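For illustration (the table and column names are hypothetical), the following query renders a GMT TIMESTAMP column
as text in the US Eastern time zone:

   SELECT order_id, FORMAT_TIMESTAMP(order_time, 'America/New_York') FROM orders;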
5.4. | Ability to selectively restore specific tables from a snapshot |
| The voltadmin restore command now supports the --tables and
--skiptables arguments that let you either include or exclude data from specific tables when
restoring a snapshot. Note that, for an empty database, all of the tables in the snapshot schema are created. The
--tables and --skiptables arguments only control whether data is restored or
not for the specified tables. See the description of the voltadmin utility in the
Using VoltDB manual for
details. |
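For example (the snapshot directory, snapshot name, and table names below are placeholders), the following command
restores only the data for two tables from an existing snapshot:

   $ voltadmin restore --tables=customers,orders /var/voltdb/snapshots nightly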
5.5. | New information in @Statistics DRPRODUCER output |
| The DRPRODUCER selector for the @Statistics system procedure now has an additional field. The
CONNECTION_STATUS column in the first VoltTable results tells you whether the connection to the consumer cluster is
active ("UP") or the connection is broken ("DOWN"). See the description of the DRPRODUCER return
values in the Using VoltDB manual
for details. |
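For instance, from the sqlcmd prompt you can view the new column with the following call (the delta flag 0 returns
statistics accumulated since the database started):

   exec @Statistics DRPRODUCER 0;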
5.6. | Java ByteBuffer now accepted as input to VARBINARY columns |
| Previously, the VoltDB callProcedure method accepted strings or byte arrays as input to VARBINARY columns. Now
you can also use ByteBuffer as an input datatype. |
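A minimal sketch in Java (the table BLOBS, its default insert procedure, and the connection details are
hypothetical; only the use of ByteBuffer as the VARBINARY parameter reflects the new behavior):

   import java.nio.ByteBuffer;
   import org.voltdb.client.Client;
   import org.voltdb.client.ClientFactory;

   public class VarbinaryExample {
       public static void main(String[] args) throws Exception {
           Client client = ClientFactory.createClient();
           client.createConnection("localhost");

           // Previously the VARBINARY parameter had to be a byte[] or a hex-encoded
           // string; a ByteBuffer can now be passed directly.
           ByteBuffer payload = ByteBuffer.wrap(new byte[] { 0x01, 0x02, 0x03 });
           client.callProcedure("BLOBS.insert", 42, payload);

           client.close();
       }
   }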
5.7. | Security Notice |
| The following change has been made to improve security and eliminate potential threats: |
5.8. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- There is a condition where, if the database is idle (that is, no read or write transactions are occurring),
snapshots can get into a scheduling loop, causing a CPU spike and preventing other threads from running. This
occurs only when the database is configured with a large number of sites per host running on systems with slower
disks and fewer CPU cores (for example, in virtualized environments). To avoid this condition, a new option,
DISABLE_IMMEDIATE_SNAPSHOT_RESCHEDULING, has been added. In normal database operation, this option is not needed.
However, if your configuration matches these conditions and your database falls idle for any significant time,
you can set this option to true when you start the database to circumvent the problem. You set the option as a
Java environment variable on all the servers at startup using the VOLTDB_OPTS environment variable and including
the "-D" flag (see the example following this list).
- Previously, if you initialized a new database root directory with a configuration that enabled cross datacenter
replication (XDCR), it was not possible to restore a snapshot after starting the new, empty database. The problem
was that XDCR creates streams for logging XDCR conflicts and those streams were seen as an existing schema. This
issue has been resolved and XDCR conflict streams are ignored for the purposes of snapshot restore.
- There was an issue introduced in 8.2 that affects database replication (DR) and time to live (TTL). If a
replicated DR table was defined with TTL, the TTL delete procedure continuously generated DR binary logs. As a
result, any attempt to pause the DR consumer or perform an orderly shutdown failed since the DR buffer never
drained. Note that this issue was specific to databases with replicated tables declared both as DR tables and
with TTL. This issue has now been resolved.
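As referenced in the first item above, a sketch of setting the option (the exact shell syntax depends on your
environment; the option name is as documented above):

   export VOLTDB_OPTS="-DDISABLE_IMMEDIATE_SNAPSHOT_RESCHEDULING=true"

Set the variable on every server before issuing the voltdb start command.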
|
6. Release V8.2.2 (September 17, 2018) |
6.1. | Additional improvements |
| The following limitations in previous versions have been resolved:
- There was an issue in VoltDB 8.2 with the elastic expansion of clusters. After adding nodes to the cluster, it
was possible for the rebalancing of the partitions to fail. This could only happen on VoltDB 8.2 or later when
the schema includes a view on a partitioned table involved in the rebalancing. The failure could result in memory
corruption, potentially leading to the cluster crashing at some future point in time. This issue has been
resolved.
- There was an extremely rare case where, in a K-safe cluster, if a node failed and the remaining nodes were busy,
the intra-cluster failure and repair messages could be processed out of order, causing errors in synchronization.
The major difficulty with this rare error condition is that it would not be detected or reported by the cluster
at the time. However, it could result in later transactions generating a hash mismatch. The issue was exacerbated
with SSL/TLS enabled on the internal ports, since there could be a backlog of messages from the failed host
requiring decryption before being delivered. This issue has been resolved.
- When loading data into variable-width columns with the bulkloader methods (either through a utility such as
csvloader or in a custom Java application), there was a memory leak that occurred whenever the input data
exceeded the maximum size of the column. This issue was specific to VARCHAR and VARBINARY columns larger than 64
bytes and only affected VoltDB V8.1 or later. In extreme cases, where many such exceptions occurred, the process
could potentially run out of memory. This issue has been resolved.
- There was an issue with time to live (TTL) introduced in 8.2, where if too many records scheduled to be deleted
had the exact same value for the TTL column, TTL would fail to delete the records and report an unexpected error
condition. This issue has been resolved.
- There was an issue introduced in VoltDB 8.1 related to database replication (DR) and TRUNCATE TABLE statements
applied to replicated DR tables. Use of TRUNCATE TABLE on a producer cluster to clear a replicated table could
result in memory corruption and failure of the consumer cluster. This issue has been resolved. It should be noted
that use of TRUNCATE TABLE on any tables in an XDCR environment (as opposed to passive DR) is not recommended,
even with this issue resolved. The current implementation cannot guarantee that the two XDCR clusters will not
suffer an undetectable conflict, since a TRUNCATE TABLE statement does not log the specific rows that are
deleted.
|
7. Release V8.2.1 (August 6, 2018) |
7.1. | Kubernetes Support |
| The software kit now includes support for running VoltDB under Kubernetes and Docker. See the readme file in
the tools/kubernetes/ folder where VoltDB is installed, or the documentation (available as
HTML and PDF). |
7.2. | Additional improvements |
| The following limitations in previous versions have been resolved:
- Normally, if the size of an input value exceeds the size of a variable-width column (such as VARCHAR or
VARBINARY), it generates a SQL exception and the statement is rejected. There was an issue introduced in V8.1
where, under certain circumstances, when bulk loading data and the input exceeded the width of a VARCHAR or
VARBINARY column, a fatal exception was generated and the database stopped. This did not happen during normal
INSERT or UPDATE statements; only when using utilities such as csvloader or using the bulkloader methods in the
Java client API. It also only occurred in certain operating system-specific environments. This issue has been
resolved.
- Due to a change in the server logic for the VoltDB Management Center (VMC) introduced in V7.9, the database
replication (DR) and Import tabs no longer showed up when those features were turned on in the database. This
issue has been resolved.
- A query can have at most 1,025 parameters (or placeholders). However, there was an issue where entering a query
with too many parameters through the @AdHoc system procedure could result in a runtime error, crashing the
database server. This includes queries entered through JDBC prepared statements, which use @AdHoc implicitly.
This issue has been resolved.
|
8. Release V8.2 (July 12, 2018) |
8.1. | New TTL feature automates deleting old data |
| A new feature, "time to live" (TTL), allows you to define an expiration timestamp for individual tables. Once
the TTL value is exceeded, the records from that table are automatically deleted from the database. This makes the
processing of streaming data easier by automating the deletion of old data. You define the expiration timestamp with the new USING TTL {value} ON COLUMN {column-name} clause in the
CREATE TABLE statement. You can also monitor the performance of TTL processing using the new TTL selector for the
@Statistics system procedure. See the documentation of CREATE TABLE and @Statistics in the Using VoltDB manual for details. |
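As a sketch (the table and column names are hypothetical), the following declaration automatically deletes rows 30
minutes after the time recorded in the last_access column:

   CREATE TABLE session_cache (
       session_id BIGINT NOT NULL,
       last_access TIMESTAMP NOT NULL,
       payload VARBINARY(1024)
   ) USING TTL 30 MINUTES ON COLUMN last_access;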
8.2. | Support for reading the username and password from a file for the VoltDB command line
utilities |
| When using scripts to manage a secure database, the command line utilities (such as sqlcmd
and voltadmin) require a username and password. Previously, there was no easy way to provide
these credentials without either using Kerberos or hardcoding the information into the script itself. Now you can
save the username and password into a properties file, accessible only to the user running the script, and then
reference that file in the script using the new --credentials argument. See the description of
the command line utilities in the Using
VoltDB manual for details. |
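For illustration, assuming a properties file of the following form (the file location, user name, and password are
placeholders; see the Using VoltDB manual for the exact property names):

   username: operator
   password: mech

a script can then invoke the utilities without embedding the credentials:

   $ sqlcmd --credentials=/home/dbadmin/credentials.properties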
8.3. | New option to create cluster-wide unique file names on file export |
| The file export connector writes export data to files on each server in a cluster. By default, the files are
unique per server, but not necessarily across the cluster as a whole. You can now set the property uniquenames to true in the export configuration to ensure
that all files are unique cluster-wide. See the description of the file export connector in the Using VoltDB manual for details. |
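A sketch of a file export configuration with the new property enabled (the target name, nonce, and other property
values are placeholders):

   <export>
      <configuration target="eventlog" enabled="true" type="file">
         <property name="type">csv</property>
         <property name="nonce">eventlog</property>
         <property name="uniquenames">true</property>
      </configuration>
   </export>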
8.4. | Improved performance when restoring snapshots |
| Restoring the database from a snapshot can take time, particularly for large databases with many views. This
release improves the performance of snapshot restore by storing the contents of certain views as part of the
snapshot, eliminating the need to rebuild the views on the fly when restoring the snapshot. Note that not all
views can be saved in the snapshot; views on partitioned tables that do not include the partitioning column in
the GROUP BY clause must still be rebuilt. Also, snapshots created on earlier versions of VoltDB will still have
their views rebuilt as part of the restoration process. But for new snapshots with views of replicated tables or
of partitioned tables with at least one partitioning column in the GROUP BY clause, restoring the snapshots
should be noticeably faster. |
8.5. | New sqlcmd directive describes the columns in a table |
| The sqlcmd utility now supports the DESCRIBE table-name
directive. DESCRIBE lists the columns of the specified table, stream, or view and related information, such as
datatype and size. See the description of sqlcmd in the Using VoltDB manual for details. |
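For example, entered at the sqlcmd prompt (the table name flight is hypothetical):

   DESCRIBE flight;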
8.6. | Security Notice |
| The following changes have been made to improve security and eliminate potential threats:
- Ability to "hide" the username and password for command line utilities in a separate credentials file.
(See the description above.)
- When enabling SSL with a user-generated certificate, you need to specify both a keystore and a truststore.
When using a commercial certificate, a local truststore should not be needed. However, previously VoltDB still
required one. Specifying a truststore is no longer required when using a commercial certificate.
|
8.7. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- There was an issue with database replication (DR) where a large multi-partition transaction could produce
binary logs from each partition on the producer that fell under the 50MB limit on inter-cluster communication,
but when aggregated on the consumer for replay, the transaction exceeded the limit. The symptom was that nodes on
the cluster would report a "bad message length" exception, causing nodes to be expelled from the cluster until
the cluster itself failed. The possibility of an excessively large transaction still exists, but now the producer
cluster rejects the transaction, and replication and consistency between the clusters are maintained.
- There was an issue using the VoltDB Management Center (VMC) if the VoltDB http port was set to port 80. Port 80
is the default port for web browsers, and if the browser did not send a port number VMC would incorrectly assume
a default of 8080 and not operate properly. This issue has been resolved.
- VoltDB reserves the maximum negative value of numeric datatypes as null (for example, -128 for a TINYINT). So
users should not be allowed to use these reserved values in the context of a given datatype. However, previously
the compiler silently accepted such constants and interpreted them as null. This issue has been resolved. The
compiler now throws an error when evaluating numeric values equal to a datatype's maximum negative value.
- Certain SQL functions (such as NOW and PI) that take no argument can be entered with or without parentheses.
However, these functions were not interpreted consistently in the selection list of a SELECT statement. If the
parentheses were left off, NOW was interpreted as the function and PI was interpreted as a column reference.
These functions are now both interpreted as functions, whether with or without parentheses. Note: this is a
slight change of behavior. If you use a column with the name PI, you will now have to fully qualify the name to
have it be interpreted as a column rather than a function. For example, in the statement shown after this list,
the first item in the selection list is interpreted as the function PI and the second as the column PI.
- There was a rare condition involving database replication (DR), where replication could break if a producer
cluster suffered a network partition. If the producer cluster split into two segments due to network issues, a
race condition could result in the consumer cluster querying the smaller segment of the cluster for topology
information after the separation but before the smaller segment was shut down by VoltDB's network partition
detection. If this occurred, the consumer cluster would wait for the smaller segment and fail to poll the larger,
surviving segment. This issue has been resolved.
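The statement referenced in the note above might look like the following (the table my_table and its PI column are
hypothetical); the unqualified PI is interpreted as the function, while the qualified my_table.PI is the column:

   SELECT PI, my_table.PI FROM my_table;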
|
9. Release V8.1.2 (June 14, 2018) |
9.1. | Recent improvement |
| The following limitation in V8.1 has been resolved:
- VoltDB 8.1 introduced a performance improvement to the data loading utilities and bulkloader API.
Unfortunately, this feature also introduced a potential error condition where, if the loader encountered a
runtime error, such as an input value exceeding the maximum width of a VARCHAR column, rather than rolling back
the transaction it could crash the database cluster. This issue has now been resolved. Because this bug can cause
the database to stop, we strongly recommend all customers using 8.1 or 8.1.1 upgrade to 8.1.2 at their earliest
convenience.
|
10. Release V8.1.1 (June 7, 2018) |
10.1. | Recent improvement |
| The following limitation in V8.1 has been resolved:
- VoltDB 8.1 introduced an issue that could interfere with the resilience of a K-safe cluster. If a node failed
while processing a multi-partition transaction, it was possible for the remaining nodes in the cluster to suffer
a deadlock. When this happened, the warning "possible multipartition transaction deadlock detected" was reported
and all subsequent multi-partition transactions would hang, along with certain system operations such as
snapshots and command log truncation. This issue has now been resolved. Customers using V8.1 are strongly
encouraged to update to V8.1.1 at their earliest convenience.
|
11. Release V8.1 (May 26, 2018) |
11.1. | Improved performance of export during schema changes |
| VoltDB now does a better job of managing the interaction of export and ongoing schema changes. Previously,
export associated with the original schema had to drain before export using the new schema could begin. Now export
from before and after a schema change is managed independently and in parallel. |
11.2. | Better memory management for replicated tables |
| The storage of replicated tables has been reorganized in this release. Previously, each partition retained a
copy of the replicated tables. Now, all of the partitions on a server share a single copy of the tables.
Applications with sizeable replicated tables or a high sites-per-host count should notice a significant reduction in
the amount of memory required by VoltDB after the upgrade. |
11.3. | Improved bulk loading of replicated tables |
| The default process for bulk loading replicated tables — either through the loader utilities such as
csvloader or through the bulkloader API — has been improved. When bulk loading data into a replicated table
using the default load procedure and performing inserts (not upserts), the load process can be as much as three
times faster, according to testing. |
11.4. | New optimization for limit/offset query performance |
| There is a limit (50 megabytes) to the amount of data any query can return. When reading large volumes of data
from a VoltDB database, use of the LIMIT and OFFSET clauses to "page" through the data is recommended. However, as
the OFFSET value grows, each query can take incrementally longer to execute. This release introduces a new
optimization where, if there is an index on the appropriate columns of the table, the query is optimized to
eliminate the penalty associated with large offsets. |
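As a sketch (the table and column names are hypothetical), a paged query of the following form can now take
advantage of an index on order_id to avoid the large-offset penalty:

   SELECT order_id, order_total FROM orders
       ORDER BY order_id LIMIT 500 OFFSET 100000;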
11.5. | Improved ad hoc query performance |
| Ad hoc queries that perform read-only operations on replicated (non-partitioned) tables have traditionally
been executed by a single partition within the database, assuming such queries are a small percentage of the overall
workload. However, when they are not a small percentage, that one partition can become overextended, resulting in
increased latency. This release changes the execution model to "round robin" read-only
queries of replicated tables to more evenly distribute the workload. |
11.6. | New system procedure @Ping |
| A new procedure has been added to the list of supported system procedures for VoltDB, @Ping. It returns a
value of zero (0) if the database is up and operational. The @Ping system procedure is a lightweight procedure and
does not require any interaction between cluster nodes, which makes it a better choice than other system procedures
(such as @Statistics) if all you need to do is check if the database is running. See the description of @Ping in the Using VoltDB manual for details. |
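For example, from the sqlcmd prompt:

   exec @Ping;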
11.7. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- There was an issue where the voltadmin stop node command would not properly authenticate with the server when
using Kerberos security. This issue has been resolved.
- There was an issue where, if a database was upgraded to a new version of VoltDB using voltadmin shutdown --save,
restarted, but then crashed unexpectedly (for example, using kill -9), the database could not restart a second
time. This issue has been resolved.
- Since VoltDB V7.7, it was possible for the response from a multi-partition read transaction to be "lost" during
a node failure. This could only happen on K-safe clusters, where the node that failed was the multi-partition
initiator (that is, the node responsible for coordinating multi-partition transactions), the node failed before
the transaction completed, and the procedure call was invoked on a different node of the cluster. Under these
conditions, the calling application might not receive a response from the invocation. Note that this issue only
occurred on K-safe clusters, for read-only multi-partition queries, and for non-topology-aware clients. This
issue has now been resolved.
- There was a sporadic problem with authentication of the VoltDB Management Center (VMC) on slow network
connections. Starting with VoltDB 7.9, the JSON interface uses keep-alive connections with a timeout period of 10
seconds. VMC sends requests every 5 seconds. However, on slow networks the VMC calls could be delayed beyond the
timeout period, forcing the user to re-authenticate manually. The timeout period has been extended to alleviate
the unexpected timeouts.
- There was an issue with Kafka import where, if the database cluster was paused and then resumed, it was possible
for certain Kafka records that were being processed when the database was paused to be lost. This issue has been
resolved.
- There was a longstanding and somewhat obscure bug involving partitioned views joined to a derived table from a
subquery. If a partitioned view did not include the table's partitioning column and was joined to another table
derived from a subquery (that is, a SELECT statement in parentheses in the FROM clause of the main SELECT), the
query could result in unexpected behavior at runtime, including possibly crashing the database. This issue has
been resolved.
|
12. Release V8.0 (February 6, 2018) |
12.1. | TLS/SSL encryption for intra-cluster communication |
| VoltDB now supports encrypting communication on the internal port, the port used for communication between
nodes in the cluster, using TLS/SSL encryption. Note that encrypting the internal port automatically adds latency to
any operations that require inter-node communication, such as K-safety and multi-partition procedures. The actual
impact depends on the configuration and application workload. It is strongly recommended you benchmark your
application before enabling internal TLS/SSL on production systems. See the chapter on "Security" in the Using VoltDB manual for details. |
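A sketch of the corresponding deployment file settings (the paths and passwords are placeholders; verify the
element and attribute names against the Security chapter of the Using VoltDB manual):

   <ssl enabled="true" external="true" internal="true">
      <keystore path="/path/to/keystore.jks" password="topsecret"/>
      <truststore path="/path/to/truststore.jks" password="topsecret"/>
   </ssl>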
12.2. | New behavior for placement groups |
| Placement groups, or rack-aware provisioning, was introduced in VoltDB 5.5. Placement groups let you specify
where each node is located, so in a virtualized K-safe environment multiple copies of a partition are distributed
onto distinct hardware, racks, etc. However, changes in VoltDB 7.0 to optimize K-safe partitioning in all cases
ended up superseding placement groups and invalidating the rack-aware positioning. This unintentional side effect
has been corrected, and placement groups once again provide rack-aware provisioning. However, the algorithm for
interpreting placement groups has changed. Where before you could use a hierarchical list of names separated by
periods (such as rack1.switch3.server5), the new algorithm focuses on the first name only and subnames are
largely ignored. Use of simple (non-hierarchical) placement names is recommended. In addition, the following rules apply to the
top-level names:
- There must be more than one top-level group specified for the cluster.
- The same number of nodes must be included in each group.
- The number of partition copies (that is, K+1) must be a multiple of the number of top-level groups.
|
12.3. | Kafka 0.10.2 is now the default for Kafka import and kafkaloader |
| The default for the Kafka import connector and the kafkaloader command line utility has changed to support
Kafka 0.10.2 and later, including the recent 1.0.0 release. Earlier versions of Kafka (0.8.2) are still supported
through configuration options and an alternate kafkaloader8 utility. |
12.4. | Support for common table expressions |
| VoltDB now supports common table expressions, including recursive common table expressions. See the
description of the SELECT statement in
the Using VoltDB manual for
details. |
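A brief sketch of a recursive common table expression (the employees table and its columns are hypothetical); it
walks a reporting hierarchy starting from employees with no manager:

   WITH RECURSIVE org_chart (emp_id, mgr_id, depth) AS (
       SELECT emp_id, mgr_id, 1 FROM employees WHERE mgr_id IS NULL
       UNION ALL
       SELECT e.emp_id, e.mgr_id, o.depth + 1
       FROM employees e JOIN org_chart o ON e.mgr_id = o.emp_id
   )
   SELECT emp_id, depth FROM org_chart;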
12.5. | Deprecated features removed from the product |
| The following features, which had previously been deprecated, have now been removed from the product as of
VoltDB 8.0:
- "Fast" read consistency (the <consistency> element)
- Old, non-elastic partitioning (the <cluster> elastic attribute)
- The @ProcInfo Java annotation for specifying procedure partitioning
- Old shell commands (voltdb add, create, recover, and rejoin)
We are also deprecating the VoltDB Deployment Manager. |
12.6. | VoltDB Deployment Manager is deprecated |
| The VoltDB Deployment Manager was designed as a console for deploying VoltDB clusters. However, it has not met
its goals for ease-of-use and flexibility. Therefore, we are deprecating it and it will be removed from the product
in a future release. In its place we recommend using one of the existing frameworks for managing distributed systems
such as (but not limited to) Chef, Puppet, Docker, and Kubernetes. |
12.7. | Security Notice |
| The following changes have been made to improve security and eliminate potential threats: |
12.8. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- There was an issue using the SQL Query tab of the web-based VoltDB Management Center (VMC) to insert or filter
records if any text fields in the query contained multiple consecutive spaces (for example, two or more leading
spaces or multiple spaces between two words). Some of the spaces were interpreted as the UTF-8 character for a
non-breaking space (\u00A0) rather than ASCII code 32, causing incorrect data insertion or filtering. This issue
has been resolved.
- In the past, it was possible for queries containing both a FULL or RIGHT OUTER JOIN and a GROUP BY operation on
a floating point (FLOAT) column to produce incorrect results. This issue has been resolved.
- Previously, certain combinations of UNION, ORDER BY, and LIMIT clauses in a single query could produce incorrect
results. This issue has been resolved.
- There was an issue with the Nagios script for monitoring replica clusters when using database replication (DR).
The script could generate a series of false alarms if the replica database was idle for more than two minutes.
The alarms incorrectly reported that replication was falling behind. This issue has been resolved.
|