1. Release V9.3.16 (March 7, 2023) |
1.1. | Security updates |
| Various
packages within Volt Active Data have been updated to eliminate known security vulnerabilities, including: |
1.2. | Recent improvements |
| The following limitations in previous versions have been resolved:
- There was a race condition where a problem pausing export connections during a schema or configuration change could result in a deadlock. This issue has been resolved.
- In certain cases when attempting to shut down a cluster, if the nodes took too long to stop, the shutdown could fail. This issue has been resolved.
- In previous releases, frequent client connection attempts could result in excessive messages in the log file, although the messages were meant to be limited to one every 60 seconds. This issue has been resolved and the rapidly repeated messages are now muted.
- Under normal conditions, after elastically shrinking the cluster (that is, removing nodes), the cluster saves a snapshot as a final step. If the snapshot accidentally started before the nodes were completely removed, later attempts to shrink the cluster could fail, reporting that an elastic operation was already in progress. This issue has been resolved.
- There was an edge case where a voltadmin dr reset command could result in a deadlock, causing the database to hang. This issue has been resolved.
|
2. Release V9.3.15 (November 9, 2022) |
2.1. | Log4J replaced by reload4J |
| VoltDB does not use any of the components implicated in the published CVEs related to Log4J. However, to avoid
any confusion, VoltDB has replaced the Log4J library with reload4J, a drop-in replacement that replicates the log4J
namespace and functionality, but eliminates all known security vulnerabilities. |
2.2. | @Statistics INITIATOR statistics improved |
| The goal of the @Statistics system procedure's INITIATOR selector is to help users understand and visualize
the performance of their application's queries and transactions. However, in the past, these statistics
inadvertently included data associated with import processing. To make the output of the INITIATOR statistics more
accurate, import processes are no longer included in the procedure results. |
2.3. | Recent improvements |
| The following limitations in previous versions have been resolved:
- There was an issue where, if an export stream was dropped and recreated and the database was then immediately shut down and restored, the newly created export stream would have an inaccurate pointer (associated with its previous incarnation). The consequence of this problem was that any records subsequently inserted into the export source were never written to the associated target. This issue has been resolved.
- There was a minor memory leak associated with statistics triggered by ad hoc queries. Although normally not sufficient to even be noticed, constant and very frequent ad hoc queries (for example, thousands an hour for days), each creating a separate connection, could eventually cause excessive memory usage, slowing down the database and, in extreme cases, ultimately blocking further transactions. This issue has been resolved.
|
3. Release V9.3.14 (July 18, 2022) |
3.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- In the situation where a cluster failed or was forcibly shut down while a node was being added or removed, attempting to restart the cluster could result in an error claiming there were "incomplete command logs", even if command logging was not enabled. This was caused by an incomplete snapshot left by the interrupted cluster expansion. The issue has been resolved.
- The timeout period associated with export block operations has been extended to avoid erroneously timing out operations for slower export targets, such as JDBC.
- There was a problem in previous releases where restarting a cluster with large volumes of unprocessed export and topic data could fail with I/O errors from too many open files. This only occurred in extreme cases — hundreds of export connectors or topics with thousands of overflow files due to their targets being down prior to the database stopping. This issue has been resolved.
- There was an issue where, if a stored procedure queued more than 200 SQL statements before calling voltExecuteSQL() and at least one of the statements was a SELECT statement that returned data, the result buffer could become corrupted, causing one or more nodes to crash. This issue has been resolved.
- Previously, the voltadmin release command did not always release export on all partitions within the cluster. This issue has been resolved.
|
4. Release V9.3.13 (April 4, 2022) |
4.1. | VoltDB Management Center improvements |
| A number of functional improvements have been made to the VoltDB Management Center (VMC), including:
- Ability to enable and disable security in VMC
- Improved user management: adding and modifying users, assigning multiple roles, and support for user-defined roles
- Execution of stored procedures in the SQL Query tab
|
4.2. | Recent improvements |
| The following limitations in previous versions have been resolved:
- There was an issue where an attempt to modify specific export characteristics of a table with ALTER TABLE... ALTER EXPORT... ON UPDATE_NEW would result in a bad table definition in the schema that could no longer be modified. This issue has been resolved.
- VoltDB uses a special prefix, VOLTDB_AUTOGEN, for indexes that are not explicitly named in the CREATE TABLE statement. Previously, if a user defined an index explicitly using the VOLTDB_AUTOGEN prefix in an index name, the CREATE TABLE statement would succeed. However, any subsequent attempts to modify the schema in any way would fail. This issue has been resolved.
|
5. Release V9.3.12 (December 17, 2021) |
5.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- There was a rare condition where the VoltDB network process could report an index out of bounds error, causing the cluster to hang. This condition is now caught. As a consequence of the error, one of the nodes will stop, but the cluster as a whole will continue and not be deadlocked.
- There was an issue where using the CAST function to convert a VARCHAR column to a BIGINT could generate incorrect values if the number in question had more than 18 digits. This issue has been resolved.
- Additional information is now logged if the SQL compiler encounters an unexpected error while processing a data definition language (DDL) statement.
|
6. Release V9.3.11 (October 7, 2021) |
6.1. | Additional information logged for bad message length errors |
| VoltDB constrains the size of messages sent between cluster nodes and will cancel transactions that exceed the
limit. However, in rare situations, the system itself can generate overly large messages and cause a "bad message
length" error. This release adds additional hexadecimal information to the logs when this happens, to help identify
the root cause of the error. |
7. Release V9.3.10 (August 24, 2021) |
7.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- Previously, if you specified an export target that did not exist (for example, by misspelling the target name) when attempting to release the queue with the voltadmin release command, VoltDB did not detect the error and would crash the database. This problem has been resolved and the command now returns an appropriate error message.
- There was an issue with the VoltDB Management Center where, if security was enabled, the user could not log in through the web browser. This problem has been resolved.
|
8. Release V9.3.9 (July 30, 2021) |
8.1. | Security Notice |
| The jQuery libraries used by the VoltDB Management Center have been updated to the following versions to address security vulnerabilities:
- jQuery V3.5.1
- jQuery UI V1.12.1
- jQuery Slimscroll V1.3.8
- jQuery Validate V1.19.2
|
8.2. | Recent improvement |
| The following limitation in previous versions has been resolved. |
9. Release V9.3.8 (June 23, 2021) |
9.1. | IMPORTANT: Limit partition row feature to be removed in VoltDB V11.0 |
| The LIMIT PARTITION ROWS feature was deprecated in Version 9 and will be removed in Version 11. This is a
change to the VoltDB schema syntax that is not forward compatible. This means that if your database schema still contains the LIMIT PARTITION ROWS syntax, you need to remove the
offending clause before upgrading to the upcoming major release. Fortunately, there is a simple process for doing
this. You can use the ALTER TABLE {table-name} DROP LIMIT PARTITION ROWS statement to correct the
table schema while the database is running and with no impact to the database contents. |
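For illustration, assuming a table named sessions (a hypothetical name) that still carries the deprecated clause, a statement along the following lines removes it while the database is running:

    ALTER TABLE sessions DROP LIMIT PARTITION ROWS;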
9.2. | New license improvements |
| This release includes a number of improvements to the licensing and management of VoltDB software. These improvements include:
- Support for a new, improved license format
- A new voltadmin inspect command used by VoltDB product support to display summary information about the cluster operating environment, including the current license
|
9.3. | Recent improvements |
| The following limitations in previous versions have been resolved:
- Previously, if the snapshot rate limit was set (using the Java property SNAPSHOT_RATELIMIT_MEGABYTES), requesting a CSV formatted snapshot could raise an illegal argument exception stating that "requested permits must be positive" and the resulting snapshot files would be empty. This only affected CSV formatted snapshots. This problem has been resolved.
- In previous releases, there was an issue when using XDCR, where repetitive health checks on the DR port could flood the logs with warnings and interfere with regular client connections. A similar condition could occur when enabling SSL on the VoltDB cluster. These problems have been resolved.
- Recent improvements to VoltDB allow clusters to continue running in a "reduced" K-safety mode after a hash mismatch occurs, rather than shutting down. In reduced mode the extra partition copies are stopped to avoid any data divergence. However, in certain cases when this happened, CPU usage could eventually spike on individual nodes in the cluster. This problem has been resolved.
|
10. Release V9.3.7 (March 12, 2021) |
10.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- Previously, it was possible for a final shutdown snapshot to stall due to "unacknowledged transactions" in export. This could happen if an export stream was declared, but the associated export connector was set to enabled="false" in the configuration. If data was then written into the stream and a final shutdown snapshot requested (using the voltadmin shutdown --save command), the shutdown could not finish due to the pending data in the queue. This issue has been resolved and pending data in disabled queues is ignored.
- There was an issue in the cron scheduler for user-defined tasks (that is, tasks defined using CREATE TASK ON SCHEDULE CRON...). As a consequence of the error, the tasks were always scheduled for immediate execution. This issue has now been resolved.
- There was a rare condition where, if a node in a K-safe cluster failed while a snapshot was being initiated, the cluster did not properly clean up the aborted snapshot. As a result, no subsequent snapshots could be started, including the snapshot needed to transfer data to the failed node when it tried to rejoin. This issue has now been resolved.
- Previously, a problem could occur if a node became detached from the cluster (for example, due to network issues) and did not immediately fail but timed out. The result was that the remote cluster might stop replication, reporting a "replica ahead of master" error. This issue has also been resolved.
- There was a rare condition where workloads containing many complex multi-partition read transactions interspersed with single-partition writes could result in a deadlock, stalling the database and generating log messages reporting a "possible multipartition transaction deadlock". This issue has been resolved.
- The snapshotconverter utility lets you generate CSV files from VoltDB snapshot files. These files can be used to recover and reload data from individual tables through the csvloader utility. However, for certain data — such as XDCR tables, tables defined with MIGRATE, or views with no COUNT(*) column — the snapshotconverter utility includes hidden columns in its output, which can be confusing. A new command flag has been added, --filter-hidden, that lets you exclude these hidden columns from the utility's output.
- The Java method TaskHelper.getTaskScepe has been replaced by the method getTaskScope. The older method is now deprecated and will be removed in a future release.
- There was an issue regarding tasks and directed procedures, where modifying the class (with LOAD CLASSES) for a directed procedure associated with a task that was already running could cause the database to fail with an error stating that active transactions were "moving backwards". This issue has been resolved.
- Previously, integer columns (such as INTEGER and BIGINT) were allowed as TTL columns. However, they did not produce the correct results. TTL columns are now constrained to TIMESTAMP columns only.
|
11. Release V9.3.6 (January 15, 2021) |
11.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- There was an issue where a stream could stop writing data to its export target after having more than two billion rows inserted into any one partition. The problem surfaced only after the necessary number of records (approximately 2.15 billion) were written to the export connector and the database was saved, shut down, restarted, and restored. After the snapshot was restored, no further records were written to the target by the export connector. This issue has now been resolved. In fact, upgrading to this release using the standard voltadmin shutdown --save command, installing 9.3.6, and then restarting the database will automatically circumvent the issue.
- There was a rare condition where using the CAST function to convert a VARCHAR column to an integer for numeric comparison (for example, CAST(IQ AS INT) > 140 where IQ is a VARCHAR column) could produce an incorrect result. This would only occur if the table containing the column had an index and that index was selected to optimize the query. This issue has been resolved.
|
12. Release V9.3.5 (December 23, 2020) |
12.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- There was a problem with the Kinesis importer where the importer could fail with a "no class found" error. This issue has been resolved.
- There was a rare situation where, if a schema change failed causing a deadlock, subsequent attempts to rejoin nodes to the cluster would fail. This issue has been resolved.
- VoltDB V9.3.2 corrected several issues with the CAST() function. However, it also introduced a new bug where using CAST() to convert a string parameter to an integer in a query requiring an index scan could cause an error when the SQL statement was evaluated. This issue has been resolved.
|
13. Release V9.3.4 (November 13, 2020) |
13.1. | Recent improvements |
| The following limitations in previous versions have been resolved:
- Two issues associated with the JDBC export connector were identified and fixed. First, when inserting into an Oracle database via the JDBC export connector, it was possible for the export threads to get blocked if the commit failed. Second, it was possible for an insert into MySQL via the JDBC connector to fail if the table definition required duplicate keys. These issues have now been resolved.
- There was an issue in the export subsystem where it was possible that releasing an export queue with missing records could result in more records being deleted from the queue than necessary. Normally, releasing an export queue with a gap means the export connector "jumps" to the next record after the missing data. However, if — after the queue pauses at a gap — the database schema was updated before the release command was issued, it was possible for additional records unaffected by the gap to be deleted from the queue. This issue has been resolved.
|
14. Release V9.3.3 (October 21, 2020) |
14.1. | Improved performance for DR DROP command |
| The voltadmin DR DROP command removes a cluster from a cross datacenter replication
conversation. Previously, this procedure could take up to 90 seconds. The command has been optimized to
significantly reduce the time it takes to complete. |
14.2. | Recent improvements |
| The following limitations in previous versions have been resolved:
- An edge case was discovered that could cause a database deadlock. This situation — a consequence of unusual failures during a schema change — has now been resolved.
- Issues that caused the New Relic monitoring agent to fail to start have been resolved.
- VoltDB V9.2 introduced an optimization that could result in different execution plans, and therefore different results, for certain queries involving multiple CAST operations. This issue has been resolved.
- The snapshotconverter utility has been corrected to interpret null values as end-of-file, rather than reporting an error. At the same time, general error handling has been enhanced and extended to report more detailed information when a failure occurs.
|
15. Release V9.3.2 (July 7, 2020) |
15.1. | Support for RHEL and CentOS version 8 |
| VoltDB has completed testing and qualification of the Red Hat Enterprise Linux and CentOS version 8 operating
systems as supported platforms for running VoltDB in development and production. |
15.2. | Recent improvements |
| The following limitations in previous versions have been resolved:
Save and Restore:
- There was a problem where using the VoltDB Management Center to restore a snapshot would not work. Using the Restore button on the web interface would actually start two restore operations, which would then cause constraint violations. This issue has been resolved.
- Using VoltDB V6.5, taking a snapshot of a cluster with database replication (DR) enabled could create a snapshot that cannot be restored in later versions of VoltDB (V7, V8 or V9). The restore operation would fail with an error. This issue has been resolved and the current version of VoltDB can now restore the problematic snapshots.
- There was an issue where restoring a snapshot, either manually or automatically on startup, could fail if the directory contained multiple snapshots and the unique IDs for the snapshots were similar. (That is, one ID matched the starting characters of another ID, such as test and testme.) When this happened, the restore would fail with an error stating that a table had an "inconsistent transaction ID." This issue has been resolved. VoltDB now performs an exact match on the selected snapshot's unique identifier.
Monitoring:
- The New Relic latency graph data has been adjusted to improve accuracy.
- The New Relic node count report was accurate while the cluster was running, but did not "zero out" for periods while the cluster was down. This report now reports zero running nodes for those intervals when the cluster was stopped.
Export:
- There was an issue where a schema or configuration change on a running database could stall if there was an active export queue connected to a JDBC target. Closing the export connection took too long and could potentially cause a deadlock with the requested change. This issue has been resolved.
- VoltDB stores data for export queues on disk in the export overflow directory. Normally, these files are deleted shortly after the data is received by the export target. However, recent releases did not always remove queue files after they were completed. These extraneous files did not impact database performance, but did occupy unnecessary disk space. This issue has now been resolved.
Other:
- There was a memory leak, caused by changes in low-level memory management in Java 11, where clusters running on Java 11 and utilizing export and/or database replication would slowly accrue unassigned yet unreleased memory, until ultimately they could run out of memory altogether. This issue has been resolved.
- There was a rare case where, if a replica cluster in passive DR was promoted and then a node failed, other nodes could subsequently fail when the cluster tried to reassign partition leadership. The symptom for this particular failure scenario was that the failing nodes reported an error processing an invocation dispatcher request. This issue has now been resolved.
- Under certain circumstances, the voltadmin show license command would generate a Python stack trace when evaluating the expiration date. This issue has been resolved.
|
16. Release V9.3.1 (May 1, 2020) |
The current release finalizes two beta features introduced in 9.2, scheduled tasks and configuration of flush
intervals. In addition, this release includes the following new features and improvements. |
16.1. | Export Improvements |
| The export subsystem has been rewritten to provide significant improvements in both performance and
reliability, as well as accommodate planned future enhancements. The new subsystem is available to all customers
using the VoltDB Enterprise Edition. The key advantages of the new export subsystem are:
- Better throughput — Initial performance tests demonstrate significantly better throughput on export queues using the new subsystem over previous versions of VoltDB.
- Adjustable thread pools — The new subsystem lets you set the thread pool size for export as a whole or define thread pools for individual connectors.
- Fewer duplicate rows — When cluster nodes fail and rejoin the cluster, the export subsystem resubmits certain rows to ensure they are delivered. The new subsystem keeps better track of the acknowledged rows and does not need to send as many duplicates to maintain the same level of durability.
|
16.2. | Custom tasks |
| VoltDB 9.2 introduced tasks, which let you schedule stored procedures for execution on a repeating schedule.
Tasks are now complete and ready for production use. In addition, this release extends support to include
custom tasks, where you can dynamically adjust the procedure called, the parameters to that
procedure, the interval between calls, or any combination of the three based on the results of the previous
invocation. You write custom tasks as Java classes that set the attributes of the next task run and identify a
callback method to invoke once the procedure completes. See the chapter on custom tasks in the VoltDB Guide to Performance and Customization for
details. There is also a sample custom task
available in the VoltDB github repository. |
16.3. | Thread pools |
| The VoltDB Enterprise Edition now lets you control the thread pools used to execute the export subsystem. A
thread pool defines the number of threads used to run processes concurrently. The more threads available, the more
concurrent processes and therefore the more throughput. However, more threads means more system resources are
consumed. Thread pools let you tune the export subsystem to balance throughput and resource utilization against your
application requirements. You specify the thread pools in the database configuration file. You can define a default thread pool for export connectors, using the defaultpoolsize attribute to the <export>
element. You can also assign specific pool sizes to individual export connectors by defining a named thread pool in
the <threadpools> element and assigning it to a connector using the
threadpool attribute of the <configuration> element. For example: |
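This is an illustrative sketch only: the pool, target, and property names are invented, and the exact child element used inside <threadpools> (shown here as <pool>) should be confirmed against the Using VoltDB documentation. It shows the pieces described above fitting together in the configuration file:

    <threadpools>
        <!-- named pool available to individual connectors; element name assumed -->
        <pool name="fastpool" size="10"/>
    </threadpools>
    <export defaultpoolsize="4">
        <!-- this connector uses the named pool instead of the default -->
        <configuration target="eventlog" type="file" threadpool="fastpool">
            <property name="type">csv</property>
            <property name="nonce">events</property>
        </configuration>
    </export>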
16.4. | Improved handling of non-deterministic procedures |
| When using K-safety, it is important that stored procedures produce deterministic results so multiple copies
of a partition can run transactions concurrently with predictable results. Without consistent results in every
partition, the data could diverge. To avoid this, traditionally VoltDB would stop the database if a
non-deterministic result was detected (referred to as a "hash mismatch" in the log file and error messages). Writing deterministic stored procedures is still important. But VoltDB now takes less disruptive action when
divergence is detected. Rather than stopping the database every time a mismatch is detected, in most cases VoltDB
now shuts down the extra copies of the partitions and runs in a single-copy reduced mode, eliminating the
possibility of divergence. Of course, in the reduced mode, the cluster is no longer K-safe, which should be
corrected as soon as possible. But until the offending procedure can be replaced and the cluster restarted, the
database will continue to run and continue to process transactions. |
16.5. | Large schema |
| VoltDB easily handles databases with large numbers of tables. However, the schema also contains any stored
procedures and auxiliary class JAR files loaded into the database. For databases using extremely large JAR files
— most noticeably databases including machine learning (ML) models — it was possible to exceed the
database's 50MB limit for the schema. VoltDB has now been rewritten to accommodate schema of arbitrary size. Note, however, that limitations on the
size of individual tables (that is, no more than 1,024 columns per table and no more than 2MB per row) are
still in effect. |
16.6. | Java 11 performance testing |
| In addition to the reliability tests that are run on an ongoing basis, Java 11 was run through a series of
performance tests to assess its impact on VoltDB, with a particular focus on the different garbage collection
schemes. It turns out that all of the garbage collectors available in Java 11 perform as well or better than
previous versions, in several tests notably reducing the size and frequency of GC pauses. |
16.7. | Improved performance for schema changes |
| Schema changes (and configuration updates) must be treated as multi-partition transactions in VoltDB. As a
result, frequent schema changes can impact ongoing throughput and latency. This is particularly noticeable for large
schema. Version 9.3 provides performance improvements to reduce the time required to apply schema changes and, as a
result, reduce the impact on application latency. |
16.8. | New Relic enhancements |
| The current release improves the content and structure of VoltDB performance and management data provided to
the New Relic monitoring suite. |
16.9. | SNMP improvements |
| All SNMP traps are now recorded in the log file. Previously, SNMP traps were sent, but not recorded. Log
entries provide additional confirmation that SNMP events are triggered. |
16.10. | LIMIT PARTITION ROWS deprecated |
| The LIMIT PARTITION ROWS clause of the CREATE TABLE statement is being deprecated. LIMIT PARTITION ROWS was
designed to avoid data overflowing the available space in the database. However, it was an all-or-nothing setting:
once you exceeded the limit the database was automatically paused. Transactions could not continue until you
manually reduced the row count. The feature itself did nothing to resolve the situation. LIMIT PARTITION ROWS is being deprecated and replaced by two significantly better capabilities now available
in VoltDB. USING TTL lets you automatically and incrementally purge old data from tables and scheduled tasks let you
build complex responsive algorithms for monitoring, managing, and resolving data volumes. Although still available in V9.3, customers should replace any use of the LIMIT PARTITION ROWS clause at their
earliest convenience, because the syntax will be removed from the product in a future major release. |
16.11. | Old export API deprecated |
| In VoltDB V8.0, the interface for writing custom export clients was updated and replaced with a new API. To
accommodate old clients, the export system uses the method signatures to distinguish between the old and new
interface. However, the old interface is now being deprecated and support will be removed in the next major
release. Any custom clients that use the old signature, where onBlockStart() and onBlockCompletion() accept no
arguments and processRow() expects two, should be updated to use the new interface. See the chapter on custom export and import in the
VoltDB Guide to Performance and
Customization for information on the new interface. |
16.12. | RabbitMQ support is deprecated |
| The export connector for RabbitMQ is now deprecated and will be removed in a future release. |
16.13. | Security Notice |
| The following libraries used by VoltDB have been updated to ensure the latest security and performance patches are applied:
- Commons-compress 1.19
- Dom4j 2.1.1
- Jackson 2.9.10
- Jetty 3.27
- Kafka client 0.10.2.2
- Netty 4.1.43
- OpenSSL 1.0.2t
- Tomcat 7.0.96
|
16.14. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- Previously, there was a rare case where an ad hoc query using parameters in a complex combination of CAST() and SUM() operations would give different results than the same query using constant values. For example, a query fragment with the arguments "A" and 1 gave different results than expected if the placeholders were replaced by the SQL constants 'A' and 1. This issue has been resolved.
- Under certain rare conditions, if a node failed on a K-safe cluster while a node was being added, a snapshot was in progress, or the @SwapTables system procedure was executing, the command logs for that cluster could be rendered incomplete. If, after this event, the cluster stopped and restarted before the next command log snapshot was taken, the command logs could not be replayed beyond the point of the node failure. This issue has now been resolved.
- Export data optionally contains six columns of metadata, including a timestamp identifying when the row was exported. Previously, this timestamp was mistakenly set 12 years prior to the actual date. This issue has been resolved.
- There was an issue where multiple left joins, under certain circumstances and in a particular order, could result in an error where VoltDB reports that it is "unable to resolve a column index for join TVE." This issue has been resolved.
- Previously, if a GROUP BY query included an arithmetic expression combining an aggregate function and a parameter cast to a specific datatype with the CAST() function, the query failed to compile. This issue has been resolved for the CAST() function. However, combining other parameterized functions with an aggregate may also cause a compilation error. The workaround, until the more general case is fixed, is to put the parameterized function in a subquery.
- Internal stress testing uncovered an extremely rare edge case where, in a K-safe cluster configured for export, if a node stops and rejoins, and then fails again during the rejoin while it is being reassigned mastership of a partition, the cluster could crash with an error indicating a "duplicate counter collision." This condition has never been reported in the field but has now been resolved.
- There was an issue where SNMP traps would fail if the server had been running for around 24 days. After 24 days, whenever an SNMP trap was triggered, VoltDB would report that the trap failed with an illegal argument exception error. This issue has been resolved.
|
17. Release V9.2.2 (December 12, 2019) |
17.1. | Maven improvements |
| The packaging of VoltDB in the Maven Central Repository has been adjusted to ensure the correct artifacts are
provided. |
18. Release V9.2.1 (December 8, 2019) |
18.1. | Improved handling of SSL/TLS connections in the JDBC interface |
| The handling of secure export connections (using SSL/TLS) through the JDBC interface has been improved.
Specifically, the requirement for a truststore when using a commercial certificate has been removed. |
19. Release V9.2 (October 28, 2019) |
This section describes new features in VoltDB V9.2 and known issues that have been fixed. Several new features are
identified as beta software. Beta software means that the features are fully functional but have not received sufficient
real-world usage or integration testing to ensure production readiness. We do not recommend using Beta features in
production. However, we encourage you to try them and provide feedback on their usefulness to your business needs. Thank
you. |
19.1. | Change to Kafka export default behavior |
| The Kafka acks property determines whether VoltDB waits for
acknowledgement of receipt from the Kafka brokers. Previously, the default was set to "1" (one), but the
recommendation was to set it to "all" to protect against the loss of records if the Kafka brokers fail. The default
has changed to match the recommended setting. Existing customers who use Kafka export but do not explicitly set the acks property may notice a slight change in export latency. The new default of "all" is
the recommended setting. However, if you are willing to accept less durability on the part of the Kafka brokers, you
can explicitly set the property back to "1" to replicate previous behavior. |
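If you choose to revert, the property can be set explicitly in the export configuration. The following sketch assumes a Kafka connector target named eventstream and a placeholder broker address; only the acks property is the setting discussed above:

    <export>
        <configuration target="eventstream" type="kafka" enabled="true">
            <property name="bootstrap.servers">kafkasvr1:9092</property>
            <!-- revert to the previous default, accepting lower durability -->
            <property name="acks">1</property>
        </configuration>
    </export>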
19.2. | Export from tables ready for general use |
| VoltDB 9.1 introduced two new beta features: the ability to export data directly from tables using the CREATE
TABLE... EXPORT TO TARGET statement and the MIGRATE statement to simplify the export and deletion of records. These
features are now fully supported for production use. |
19.3. | Scheduling stored procedures as repetitive tasks (BETA) |
| VoltDB 9.2 introduces a new feature, scheduled tasks. Tasks let you schedule the repeated execution of stored
procedures at a set interval or using a cron-style declaration. You schedule tasks using the CREATE TASK statement.
There is also a corresponding @Statistics selector, TASK, and ALTER TASK and DROP TASK statements. See the section
on scheduling tasks or the reference pages describing each statement in the Using VoltDB manual for details. |
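As a rough illustration (the task and procedure names are invented), a task that runs a cleanup procedure on a fixed interval might be declared along these lines:

    CREATE TASK purge_expired
        ON SCHEDULE EVERY 5 MINUTES
        PROCEDURE PurgeExpiredSessions;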
19.4. | Directed procedures for distributing transactions to every partition |
| One aspect of scheduling tasks is defining how they run; that is, as a single multi-partition transaction or
as separate transactions on each partition. To support the latter execution model, a new type of stored procedure is
being introduced, the directed procedure. Directed procedures are partitioned procedures in
that each instance of the procedure runs on a separate partition. However, directed procedures do not have a
specific partitioning value. You use the CREATE TASK statement or the Java
callAllPartitionProcedure method to have a separate instance of the procedure run on every
partition of the database. See the section on directed procedures in the Using VoltDB manual for details. |
19.5. | New Export tab and statistics in VMC |
| The VoltDB Management Center (VMC) has a new tab, Export, which provides enhanced statistics and graphs that
help analyze the performance of export connectors. |
19.6. | User-defined aggregate functions (BETA) |
| The CREATE FUNCTION statement lets you declare a user-defined scalar function. Starting with VoltDB 9.2, you
can develop and declare user-defined aggregate functions as well using the CREATE AGGREGATE FUNCTION statement.
Aggregate functions process data from multiple rows and return a single aggregated value. See the chapter on user-defined functions in the VoltDB Guide to Performance and Customization for
details. |
19.7. | Ability to query, filter and merge statistics using SQL (BETA) |
| You can now query statistics from the VoltDB @Statistics system procedure as if the results were SQL tables.
You can use the querystats directive in sqlcmd or the new @QueryStats system procedure specifying
the column names from the @Statistics results in the selection expression. For example, the following
sqlcmd command aggregates the row count from all of the partitions for each table using the TABLE
selector, as sketched below. Note that not all SQL syntax is supported as input to the querystats parser and, for the initial release, any
syntax errors are reported in the server log but not reported to the user's console. See the descriptions of sqlcmd or the @QueryStats system procedure in the
Using VoltDB manual for further
details. |
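The following is a hedged sketch of such a query in sqlcmd. It assumes the TABLE selector exposes TABLE_NAME and TUPLE_COUNT columns and that the selector is referenced through a statistics() function in the FROM clause; check the @QueryStats reference page for the exact form:

    $ sqlcmd
    1> querystats select table_name, sum(tuple_count) from statistics(table, 0) group by table_name;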
19.8. | Configurable flush intervals for DR and export buffers (BETA) |
| You can now control how frequently the buffers for database replication (DR) and export are flushed. Normally,
DR and export data is buffered until a certain amount of data is ready and then the data is sent as a batch. To
avoid small amounts of data lingering in the buffer, there is a time limit after which the data is sent even if the
full batch size has not been reached. This time limit is called the "flush interval". You can now control these
settings in the configuration file by setting a system-wide minimum and separate flush intervals for DR and export.
See the VoltDB Administrator's Guide
for details on configuring flush
intervals. |
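As an illustrative sketch (the element and attribute names shown should be verified against the VoltDB Administrator's Guide), such a configuration might set a system-wide minimum plus separate DR and export intervals, in milliseconds:

    <systemsettings>
        <flushinterval minimum="500">
            <dr interval="500"/>
            <export interval="4000"/>
        </flushinterval>
    </systemsettings>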
19.9. | Undocumented feature deprecated |
| In conjunction with the new configuration options for flushing buffers, an older undocumented attribute for
setting the DR flush interval is being deprecated. The flushInterval attribute of the <dr> element is
deprecated and will be removed in a future major release. In the meantime, this attribute will continue to operate
and will supersede the new settings and defaults, so as not to change existing behavior for any customers who may
have used this feature. However, those users are strongly encouraged to switch to the new, supported feature at
their earliest convenience to avoid problems when the deprecated attribute is removed from the product in the
future. |
19.10. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- There was an issue where an attempt to reduce the size of a running cluster using the voltadmin resize command would fail with a plan fragment error if the cluster had an enterprise license but without support for database replication (DR). DR is not required for cluster resizing to work and this issue has been resolved.
- There was an issue where, in rare cases, if a schema update or configuration change failed, subsequent attempts to update the schema or configuration would also fail. This could only happen if the original update failed with an unhandled exception (such as a reference to a missing Java method), at which point subsequent update attempts reported that another update was still in progress. This issue has been resolved.
- Previously, there was the rare possibility that a node failing in a K-safe cluster could cause a multi-partition deadlock, forcing the cluster to stall. This issue has been resolved.
- There was an issue with the loopback export connector. If a table or stream was declared as exporting to a target associated with the loopback connector, any attempt to alter the table definition would result in the schema change failing due to a timeout. This issue has been resolved.
- Beginning with V9.0, VoltDB supported the use of Java 11 for running servers. However, changes in Java 11 could cause VoltDB to incrementally leak memory until all available resources are exhausted. This issue has now been resolved. We strongly recommend that customers using VoltDB with Java 11 upgrade to V9.2.
|
20. Release V9.1.1 (September 3, 2019) |
20.1. | Licensing change for the VoltDB client in Maven. |
| The license for the VoltDB JAR file in Maven has been changed from AGPL to an MIT license. |
21. Release V9.1 (August 8, 2019) |
21.1. | Reduce the size of a running cluster. |
| Previously, you could expand a running cluster to increase capacity by starting the new server with the
voltdb start --add command. However, until now there was no way to reduce the cluster size
without stopping and reconfiguring the database. With the introduction of the voltadmin resize
command, you can now elastically shrink a running cluster as well. The voltadmin resize command
tests the cluster to make sure it can be reduced in size, tells you which nodes will be removed, and then starts the
resize process. See the section on "Removing Nodes with Elastic
Scaling" in the Using VoltDB
manual for details on reducing the size of a running VoltDB cluster. |
21.2. | MIGRATE TO TARGET finishes beta testing and is ready for production use. |
| VoltDB V9.0 introduced migration as a beta feature, making it possible to automate the data lifecycle by
migrating data to an external resource before deleting it from the VoltDB database. With V9.1, the migration feature
is now fully integrated, tested, and extended to round out its capabilities. Warning: The syntax for the CREATE TABLE statement has changed from the original beta release. For consistency and to
allow use without TTL, the MIGRATE TO TARGET clause now appears after the table name and before the column
definitions, rather than after the USING TTL clause. If you used the MIGRATE TO TARGET clause during the beta test
period, you will need to modify your schema and reload your database after installing V9.1. See the description of the CREATE
TABLE statement in the Using
VoltDB manual for details on automating the migration of data to external targets. |
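To illustrate the clause placement described in the warning (the table, target, and column names here are hypothetical), the migration clause now sits between the table name and the column list:

    CREATE TABLE sessions MIGRATE TO TARGET oldsessions (
        session_id BIGINT NOT NULL,
        last_access TIMESTAMP NOT NULL
    ) USING TTL 30 MINUTES ON COLUMN last_access;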
21.3. | New MIGRATE statement. |
| Using the MIGRATE TO TARGET clause with USING TTL automates the export of data to an external target before
the data is deleted. In addition, you can now use the MIGRATE TO TARGET clause by itself — without the USING
TTL clause — for situations where you want to manually control when the data is migrated. To do this, you can
use the new MIGRATE statement to manually initiate the migration during a transaction. Note that you can also use
the MIGRATE statement for tables defined with both MIGRATE TO TARGET and USING TTL, in which case an explicit
MIGRATE statement can preemptively migrate the data before the TTL value is reached. See the description of the
MIGRATE statement in the
Using VoltDB manual for
details. |
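A minimal sketch of manually triggered migration, reusing the hypothetical sessions table from the previous example, might look like this:

    MIGRATE FROM sessions WHERE last_access < DATEADD(DAY, -30, CURRENT_TIMESTAMP);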
21.4. | EXPORT TO TARGET for exporting data directly from tables (Beta Feature). |
| VoltDB 9.1 introduces a new feature that makes it possible to connect tables (not just streams) to export
targets. If a table is declared with the EXPORT TO TARGET clause, just like a stream, any data inserted into the
table is passed to the export connector. This simplifies applications that want to stream incoming data to external
systems, where previously there had to be both a table and a matching stream. Warning: CREATE TABLE... EXPORT TO TARGET is a beta feature. All functionality is believed to be complete as
described. However, it is possible individual aspects of the feature may change before it is deemed production
ready. For that reason export directly from tables is not recommended for production use at
this time. However, we encourage you to try it in development and welcome feedback on its usefulness. For example, a table declaration can allow any data inserted into the Alerts table to also be exported to the Messagelog target. You can also customize which events trigger export. Triggering events include inserts, updates (both before and after the update), and deletes. By default, only inserts trigger export; you can specify a different list of triggers with the ON clause. Both cases are sketched below. See the description of the CREATE
TABLE statement in the Using
VoltDB manual for more information about associating tables with export targets. |
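The two sketches below correspond to the examples referenced above: a minimal Alerts table exported to the Messagelog target using the default trigger (inserts only), and the same declaration with an explicit ON clause. The column definitions are invented for illustration:

    CREATE TABLE Alerts EXPORT TO TARGET Messagelog (
        id BIGINT NOT NULL,
        severity VARCHAR(16),
        message VARCHAR(256)
    );

    -- export on inserts and on the new value of updated rows
    CREATE TABLE Alerts EXPORT TO TARGET Messagelog ON INSERT, UPDATE_NEW (
        id BIGINT NOT NULL,
        severity VARCHAR(16),
        message VARCHAR(256)
    );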
21.5. | Improved handling of multi-partition transactions with large intermediate result sets |
| VoltDB limits each transaction to 50 MB of results. In fact, each transaction fragment is limited to 50MB.
However, for multi-partition transactions, this means each partition can return up to 50MB of data to the
coordinator. For large clusters with many unique partitions, it is possible for all this data to exceed the
allocated Java heap for the coordinator, causing that node to fail with an out of memory error. To avoid this situation, VoltDB now limits the amount of data each fragment can return, based on the current
maximum heap size and number of partitions. By default, each partition in a multi-partition transaction is only
allowed to return the lesser of 65% of the maximum heap size divided by the number of unique partitions
(sitesperhost * number of nodes / (k+1)) or 50MB. The limit is further reduced for read-only multi-partition transactions to account for the fact that multiple read-only transactions can run simultaneously. If, at runtime, a fragment exceeds the limit, it throws an exception and the transaction rolls back gracefully. You can adjust this per-partition multi-partition response limit by setting the environment variable MP_MAX_TOTAL_RESP_SIZE. You can either set it as the percentage of max heap to use in the calculation (by using the percent sign) or as a specific number of bytes (by using an integer value with no suffix). For example, to allow a maximum of only 50% of the allowable heap size, you can set the environment variable before starting the server, either directly or as a Java system property through VOLTDB_OPTS, as sketched below. |
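For example, either of the following (shown for a Bash shell) limits the calculation to 50% of the maximum heap. The second form passes the setting through VOLTDB_OPTS; it assumes the Java system property uses the same name as the environment variable:

    $ export MP_MAX_TOTAL_RESP_SIZE=50%

    # or, as a Java system property passed to the server:
    $ export VOLTDB_OPTS="-DMP_MAX_TOTAL_RESP_SIZE=50%"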
21.6. | New command to show license information. |
| There is a new command, voltadmin show license, that lists information about the current
license in use by a running VoltDB cluster. You can get similar information programmatically using the
@SystemInformation system procedure with the LICENSE selector. |
21.7. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved:
- There was a rare edge case where a query could return incomplete results. If an index on a table included two columns and the WHERE clause of a query on the table included both an IN clause applied to one column and a less than or equal to (<=) evaluation of the second column, fewer than expected rows were selected. This issue has been resolved.
- Previously, it was not possible to use a table alias in a DELETE statement. This issue has been resolved.
- The V9.0 Enterprise kit accidentally left out the scripts necessary for running VoltDB in a Kubernetes environment. This issue has been resolved.
- Previously, the ALTER TABLE statement did not accept the ALTER keyword before USING TTL when altering a TTL definition. This issue has been resolved and ALTER TABLE now requires ALTER before USING TTL (see the sketch following this list).
- There was an issue related to JDBC export when using the ignoregenerations=false property. When generations are not ignored, VoltDB is supposed to create a new table name each time the schema changes. However, starting with VoltDB V8.4, JDBC export could create new tables when nodes failed or the database restarted, even if the schema did not change. This issue has been resolved.
- In a related JDBC export issue, if there was a schema change to an export stream associated with a JDBC target while the target was disabled, the export connector could fail, stopping export, the next time a record was inserted into the stream. This issue has been resolved.
- There was an issue with file export when using TSV (tab-separated value) format and exporting data containing quotation marks or backslashes. The export connector incorrectly attempted to "escape" the output, although TSV format does not support quoting or escaping. The result was incorrect output in the export file. This issue has been resolved. The export connector no longer attempts to escape special characters in the output.
- Using the VoltDB V9.0 Java client JAR with Java version 11 could result in a run-time error when deallocating a direct byte buffer. This issue did not affect the full server JAR file and has now been resolved.
- Previously, if there was an index on a table where the index uses a function that could fail (due to an invalid input value, for example), in some instances inserting a row into the table could succeed even if the update to the index failed. This issue has been resolved and now a failed index update causes the table insert to fail as well.
- VoltDB 8.2 introduced an issue that could cause a cluster to hang or crash when attempting to add nodes using elastic scaling. Under certain conditions, where a database has views, it was possible for the cluster to hang or to crash reporting a deserialization error while attempting to pass copies of the current database contents to the joining nodes. This issue has been resolved.
- There was an issue where changing the Kafka export configuration on a running database could cause the update to hang. The issue was triggered by a Kafka export configuration specifying an invalid Kafka broker. Attempting to update the configuration to change the invalid broker specification would cause the update process to hang. This issue has been resolved.
- There was an issue introduced in VoltDB 9.0 that affects certain views on streams. If the view definition included a function, it was possible for the CREATE VIEW statement to return an error stating that VoltDB could not get the row count from native storage. This issue has been resolved.
- There was an edge case related to indexes that could cause a VoltDB database to crash. If the index was defined with STARTS WITH or LIKE in the WHERE clause and a record was inserted into the table resulting in a value of a datatype other than VARCHAR being evaluated, the operation would fail, bringing down the database. This issue has been resolved and use of these expressions in indexes is now protected against illegal datatype casts. In a related issue, if the WHERE clause of a partial index definition includes a column reference as an argument to a function inside an expression (such as LIKE), the index could fail when the table is updated, crashing the database. This issue has been resolved.
- When fetching column values from a VoltTable using Java, you can either use a column index or the column name. Previously, using the column name for the lookup was significantly slower than using the column index. This code has been optimized to significantly improve lookups by name, minimizing the difference between column index and name lookups.
- VoltDB V9.0 introduced several new features associated with export. In addition to further extending export functionality, a number of edge cases associated with the durability of export queues during unusual system failure scenarios were identified and resolved in the current release.
- There was an issue in recent versions of VoltDB where, after restoring a database from a snapshot, the export statistics reported by the @Statistics system procedure could be inaccurate, because the export sequence number was not correctly reset. This did not affect the actual database contents being exported. However, the incorrect sequence number could also appear in the export metadata columns. This issue has been resolved.
- Under certain conditions, recent versions of VoltDB could fill the logs with repeated warnings that it "received export message x for partition y... which does not exist on this node" after a rejoin, recovery, or elastically adding a node. These messages did not indicate any real problem with the database or export and have now been rate limited and demoted to informational messages.
- There was an edge case where attempting to update the schema while a snapshot is being saved could hang the database. This issue has been resolved. Attempts to change the schema or configuration during a snapshot are no longer allowed and must wait until the snapshot is complete.
- There was an issue with elastic expansion of a cluster. If, while adding nodes "on the fly", ongoing transactions within one of the partitions being rebalanced generated a constraint violation, it could result in the rebalance operation reporting a fatal "failed to delete tuple" error, causing the server process to exit. This issue has now been resolved.
- Under certain rare error conditions, an attempt to elastically add nodes to a cluster could fail, crashing the cluster and reporting an illegal argument exception. This issue has been resolved.
- There was an issue where, if database replication (DR) encountered a corrupt file in the overflow directory, it reported an error "retrieving invocation buffer from disk." Unfortunately, it did not resolve the issue and as a result logged this error repeatedly, flooding the log file. VoltDB now identifies and addresses bad overflow files as part of its startup behavior.
- Previously, the bulkloader interface, which is used by VoltDB data utilities such as csvloader and is available through the Java API, did not correctly account for the additional data structures required by cross-data center replication (XDCR) or TTL with migrate. As a result, attempting to bulk load data into an XDCR cluster or a table with MIGRATE TO TARGET and USING TTL could cause the cluster to crash. This issue has been resolved.
|
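The sketch below illustrates the ALTER TABLE item mentioned in the list above; the table, TTL value, and column names are hypothetical:

    ALTER TABLE sessions ALTER USING TTL 20 MINUTES ON COLUMN last_access;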
22. Release V9.0 (April 11, 2019) |
22.1. | Updated operating system and software requirements |
| The requirements on the underlying operating system and software environment have been updated for VoltDB
V9.0. The older CentOS and RHEL version 6.6 is no longer supported and Java 11 support has been added. In addition,
support for Kafka 0.8.2 has been dropped, and Kafka import and export now require version 0.10.2 or later. See the
VoltDB Administrator's Guide for
details on the platform requirements for running VoltDB clusters. |
22.2. | Support for Java 11 |
| VoltDB now supports both Java 8 and Java 11. |
22.3. | Automated Deletion of Old Data |
| VoltDB 8.4 introduced a new feature, USING TTL ("time to live"), that lets you define when records expire and
can be deleted. This feature simplifies application design by automatically removing old data from the database
based on settings you define in the table schema. With VoltDB 9.0, this feature is extended to include the migration
of deleted data to other systems for archival purposes, as described next. |
22.4. | New Export Capabilities |
| The code that supports export of data to external systems has been rewritten to provide flexibility, improve
reliability, reduce system resource utilization, and support new and future product features. The new export system
reinforces the durability of data queued to the export connectors across unexpected system and network failures and
allows export to be extended to add new capabilities. The first two new capabilities are:
- ALTER STREAM — The ability to modify an existing stream. You can use the new ALTER STREAM statement to modify the schema of the stream or the target for export without interrupting any already queued export data. See the description of ALTER STREAM in the Using VoltDB manual for details.
- Automated Data Migration — You can now automate the export of data from VoltDB database tables to other systems as part of the data aging process. For tables declared with the USING TTL clause you can now add a MIGRATE TO TARGET clause. With MIGRATE TO TARGET, data that exceeds its "time to live" is queued to the specified export connector. Once the data is exported and acknowledged by the external system, it is then deleted from the VoltDB database. See the description of the CREATE TABLE statement in the Using VoltDB manual for more information about the USING TTL and MIGRATE TO TARGET options.
Export now starts when the stream is defined, not when the target is defined. Previously, stream data was not
queued for export until a valid export connector was configured and connected. Starting with VoltDB 9.0, data
written to streams declared with the EXPORT TO TARGET clause are queued for export whether the target is configured
or not. Similarly, the queued data is removed as soon as the stream itself is removed with the DROP STREAM
statement. Also, export is now an enterprise feature. The VoltDB Community Edition provides access to two streams per
database, so users have access to basic export functionality. But for unlimited access to export and migration
features, the Enterprise Edition is required. |
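As a brief illustration of the first capability (the stream, column, and target names are invented, and the exact ALTER STREAM variants should be checked against the reference page), ALTER STREAM can change a stream's schema or its export target in place:

    ALTER STREAM alerts ADD COLUMN severity VARCHAR(16);
    ALTER STREAM alerts EXPORT TO TARGET archivelog;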
22.5. | "Live" Schema Updates with Database Replication |
| Previously, database replication (DR) required the schema of the cooperating databases to match for all DR
tables. So updating the schema required a pause while all of the affected databases were updated. Starting with 9.0,
this limitation has been loosened. DR continues even if the schema are different. So it is possible to update the
schema without interrupting ongoing transactions. Of course, it is not possible for VoltDB to resolve individual transactions if the schema differ. So if a DR
consumer (either a replica in passive DR or an XDCR cluster in active replication) receives a binary log where the
schema of the affected table(s) does not match, DR will stall and wait for the schema to be updated to match the
incoming data. Therefore, care must be taken when updating the schema to ensure that no transactions that are
affected by the schema change are processed during the interval when the clusters' schema do not match. See the
sections on updating DR schema for passive and active DR for more
information. |
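A minimal sketch of the precaution described above, assuming a hypothetical DR table named customer: apply the identical DDL to every cluster in the DR relationship while the application is not issuing transactions against the affected table, so no binary log with a mismatched schema is ever shipped.

    -- Run the same statement on each cluster in turn, for example on
    -- cluster A and then on cluster B, before resuming writes to the table.
    ALTER TABLE customer ADD COLUMN loyalty_tier TINYINT;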
22.6. | Simplified JSON interface |
| A new version of the VoltDB JSON API, 2.0, is now available. The original JSON interface provides complete
information about the schema for the data being returned, including separate entries for the data, the column names,
and datatypes. The 2.0 API returns a much more compact result set, with each row represented by an associative array
whose elements pair each column name with its value. |
22.7. | New @Statistics selector IDLETIME |
| There was an undocumented feature of the @Statistics system procedure that reported on how busy the execution
queues for the individual partitions are. This data is now supported as the IDLETIME selector. See the description
of the @Statistics system procedure
in the Using VoltDB manual for
details. |
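For example, from the sqlcmd utility the new selector can be queried like any other @Statistics selector (the second parameter, 0, requests statistics accumulated since the database started rather than since the previous call):

    exec @Statistics IDLETIME, 0;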
22.8. | JVM stats automatically disabled |
| The I/O activity that JVM stats generates can interfere with the performance of applications like VoltDB, to the
point where it can cause cluster nodes to disconnect. VoltDB now automatically disables JVM stats if /tmp is not
defined as tmpfs (temporary in-memory storage). |
22.9. | Changes to how export streams are reported |
| Export streams (that is, streams defined with the EXPORT TO TARGET clause) are no longer reported under the
TABLE statistics of the @Statistics system procedure. Export streams are now reported under the EXPORT selector, and
tables and streams without export are reported under TABLE. |
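So, assuming a hypothetical stream named session_events declared with EXPORT TO TARGET, the call exec @Statistics EXPORT, 0; now includes the rows for session_events, while exec @Statistics TABLE, 0; lists only tables and streams that are not exported.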
22.10. | Support for multiple schema and classes files when initializing the database
root |
| The voltdb init command now allows you to specify multiple files as arguments to the
--schema and --classes flags. Separate multiple files with commas. You can
also use the asterisk (*) as a wildcard character. For example, a single command can initialize a root directory
with two schema files, plus all the JAR files from one folder and another JAR file from the current working
directory, as shown in the sketch following this section. It is also possible to specify multiple schema and classes files when configuring VoltDB for use in
Kubernetes. See the readme file in the tools/kubernetes/ subfolder where VoltDB is installed
for details. |
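The command referred to above might look like the following sketch; the file and folder names are invented for illustration and are not taken from these notes:

    voltdb init --schema=flights.sql,reservations.sql --classes=procedures/*.jar,extra.jar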
22.11. | Log4J logger JOIN has been renamed to ELASTIC |
| The Log4J logger for elastic operations such as adding new cluster nodes on the fly has been renamed from JOIN
to ELASTIC. |
22.12. | Security Notice |
| The following change has been made to improve security and eliminate potential threats: The Kafka import and export connectors now support the use of Kerberos
authentication. Previously, you could not enable SSL/TLS and Kerberos authentication at the same time.
This limitation has been removed.
|
22.13. | Additional improvements |
| In addition to the new features and capabilities described above, the following limitations in previous
versions have been resolved: There was an obscure issue where if a database cluster was restored from snapshots
multiple times, the fourth restore command could stall and eventually time out. For this unusual situation to
occur, the database schema must contain views, the cluster must be K-safe, and nodes must have stopped and rejoined between
each snapshot restore operation. This issue has been resolved. When rejoining a node to a running cluster, the system clock on the rejoining node must
be within the limits for clock skew on the cluster, just like when starting the cluster for the first time. If
not, the rejoin operation will fail. Previously, there was an issue where if a rejoin failed due to clock skew,
subsequent attempts to rejoin nodes would fail even if the clock skew had been corrected. This issue has been
resolved. Previously, when using database replication, if you dropped a DR table using the DROP
TABLE statement and then recreated a table with the same name using CREATE TABLE, the new table was treated as a
DR table, even though it had not been declared as such. This issue has been resolved. The new export infrastructure corrects a number of issues related to the management
of export queues across planned and unplanned cluster operations such as shutdowns, restarts, node failures and
rejoins, and configuration changes. Although rare, these issues tended to fall into the category of export data
not draining or excessive numbers of duplicate records being exported. These issues are resolved as part of the
infrastructure redesign. There was an issue where if a graceful shutdown operation was interrupted (for example,
by a CTRL-C on the voltadmin shutdown command), a subsequent voltadmin shutdown
--force command would fail. This issue has been resolved. It was possible for frequent schema changes to interfere with a node's attempt to rejoin
the cluster. When this happened the rejoin operation would time out, reporting that the cluster could not send
data to the rejoining node for more than 60 seconds. This issue has been resolved. Previously, attempting to restore a snapshot created on a standalone cluster to a cluster
configured for cross datacenter replication (XDCR) would fail with a misleading error message (indicating that
the configuration could not be updated). This issue has been resolved, the snapshot is restored, and no error is
reported. Previously, if you created a table with no columns on a cluster with command logging
enabled, the cluster would crash when it attempted to truncate the command logs. This issue has been
resolved. There was an issue introduced in VoltDB 8.2 when the USING TTL clause was added to allow
you to automatically delete old records from tables based on the specified column value. The USING
TTL clause was accidentally also allowed on CREATE STREAM statements, although it has no application to streams. This issue
has now been resolved. Under certain rare conditions, the @Quiesce system procedure could return control to the
calling program before all export and DR data was successfully processed. If this occurred, it was possible for an
orderly shutdown (that is, voltadmin shutdown without the --force
argument) to stop the cluster before all pending DR or export data was made durable. This rare race condition
has now been resolved. In the unusual case where a subselect statement of a partitioned table did not
need to be enclosed (that is, the outer SELECT statement did no filtering of the subselect
results), the VoltDB parser could produce incorrect results. This issue has been resolved. There was an issue with the ALTER TABLE statement when modifying a table with an existing
USING TTL clause. Altering the table to add or drop a column would result in the USING TTL clause being dropped
by mistake. This issue has been resolved. There was an issue where complex queries with many LEFT JOIN subclauses would consume
large quantities of heap space during the planning phase, ultimately running out of memory in the worst case.
This issue has been resolved. Note however, that such queries may still take a long time to execute once planned
and possibly exceed the query timeout limit. There was a rare edge case that could impact database replication (DR) and elastic
expansion. When adding nodes to a running cluster, DR is stopped and restarted. Due to a race condition, there
was a very rare possibility that the DR restart could generate an unexpected error such as "unable to find tuple
for deletion." When this happened, DR would stop and the cluster would have to restart from scratch to
reestablish replication. This issue has now been resolved. Previously, if a GEOGRAPHY column appeared in both an index and a view, the index was not
created correctly, potentially leading to a subsequent crash when a transaction attempted to update the index.
This issue has been resolved. Previously, attempting to start a VoltDB database using Kerberos authentication but with an
invalid user name would, as expected, fail. However, the resulting error messages did not identify the principal
in use. The error messages have been improved to provide more information about the specific cause of the
failure. There was an issue in the VoltDB Enterprise Edition where, if you attempted to add a node
on the fly (known as elastic scaling) but did not have a license for database replication (DR), the operation
would cause the cluster to crash and interfere with restarting the cluster from command logs. This issue has
been resolved. There was an issue associated with partial indexes and the COUNT(*) function. If the
index did not cover all of the rows in the table (for example, CREATE INDEX expensive ON product(cost)
WHERE cost > 5000;), then a query selecting COUNT(*) and using the index could give a wrong answer.
This issue has been resolved.
|