Installing with a shared schema
Configuring a shared schema environment for Bravura Security Fabric offers several advantages:
Better utilization of a single backend, as long as the backend is running on a dedicated database server.
Cheaper solution for load balancing, because the Bravura Security Fabric servers don't need to be as robust.
Shared schema servers can be added without the need to synchronize databases first.
Bravura Security strongly recommends creating at least two replicated Bravura Security Fabric servers for fault tolerance and backup; when Bravura Privilege is installed, the recommended minimum is three. See Calculating the number of servers required for more information.
To configure a shared schema environment, ensure that:
All server clocks are synchronized.
All servers have the proper database client software installed and configured to connect to the same backend.
All servers have the same Bravura Security Fabric instance name.
All servers have the same communication key (or Master Key), database encryption key, workstation authentication encryption key, Connector encryption key, and IDMLib encryption key.
It is recommended that Bravura Security Fabric is installed in the same directory on each server.
There are two methods of configuring a shared schema:
Complete a regular install of Bravura Security Fabric using a new database. After the installation, redirect the Database Service to an existing database schema using the iddbadmin utility. Contact support@bravurasecurity.com for assistance.
Or, during Using setup to install Bravura Security Fabric:
Select Use a preconfigured dedicated database user to install the new instance when prompted.
On the Database Server configuration page, enter the login credentials for the login account you used for the backend connection on the first server.
Click Advanced to load the Advanced Database Configuration Options page.
Deselect the Install schema checkbox. This causes the installer to check if the specified database schema already has the required tables installed. If the tables do not exist, a warning message is displayed.
Deselect the Populate default data checkbox.
Click OK to close the Advanced Database Configuration Options page.
Click Next to continue with installation. Once installation is complete, the new instance starts sharing a schema with the instance whose connection details you specified.
For details on the Database Server configuration page, see Pre-configured database server settings.
Shared-schema servers can be used in a replication environment. See Configuring replication servers for more information.
Coordinating multi-node upgrades
When upgrading a shared schema environment with multiple nodes, the primary node must complete its post-upgrade tasks and start services before secondary nodes proceed past their own post-upgrade tasks. This sequencing is critical because secondary nodes depend on the primary's running services.
In GUI mode, the installer naturally pauses at the Post Upgrade Tasks Complete dialog, allowing the operator to coordinate primary and secondary node sequencing manually.
For silent or automated upgrades, use the --pause-after-tasks flag to achieve the same coordination. When this flag is specified, the installer creates a signal file (upgrade-pause.signal) in the instance directory after post-upgrade tasks complete and waits for the file to be removed before proceeding to start services.
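The pause protocol can be driven from a script: poll until the installer drops the signal file, and delete the file when it is time for that node to resume. The following is a minimal sketch; the instance directory path is an assumption you must adjust for your environment.

```shell
# Sketch of the signal-file handshake; INSTANCE_DIR is a hypothetical
# path -- substitute your instance's actual directory.
INSTANCE_DIR="${INSTANCE_DIR:-/tmp/psinstance}"
SIGNAL_FILE="$INSTANCE_DIR/upgrade-pause.signal"

wait_for_signal() {
    # Block until the installer creates the signal file, which indicates
    # that post-upgrade tasks on this node are complete.
    until [ -f "$SIGNAL_FILE" ]; do
        sleep 5
    done
}

release_node() {
    # Removing the signal file tells the paused installer to resume,
    # start services, and finish the upgrade.
    rm -f "$SIGNAL_FILE"
}
```

In an automation tool, `wait_for_signal` maps to the "wait for the signal file to appear" steps below, and `release_node` to the "delete the signal file" steps.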
The recommended automation workflow is:
Run setup.exe -U -silent --pause-after-tasks -instance <instance> on the primary node. The flag is required on the primary to prevent services from starting before secondaries have completed their post-upgrade tasks.
Wait for the signal file (upgrade-pause.signal) to appear in the primary's instance directory, confirming that post-upgrade tasks are complete.
Run setup.exe -U -silent --pause-after-tasks -instance <instance> on each secondary node. The flag is recommended on secondaries to ensure the primary's services are fully running before secondaries attempt to start services and replicate. Without it, there is a race condition if a secondary finishes its post-upgrade tasks before the primary's services are up.
Wait for the signal file to appear on each secondary node.
Delete the signal file on the primary node. The installer resumes, starts services, and completes the upgrade.
Wait for the primary node upgrade to finish completely.
Delete the signal files on the secondary nodes. Each secondary resumes, starts services, and completes its upgrade.
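The workflow above can be sketched as a sequencing driver. Everything here is an assumption standing in for your environment: the node names, the instance name `default`, and the run_remote placeholder, which in practice would be your remote-execution tool (for example, PowerShell remoting). The wait steps are noted as comments; in a real script each would be a polling loop on the node's signal file.

```shell
# Hypothetical orchestration of the multi-node upgrade sequence.
PRIMARY="node1"
SECONDARIES="node2 node3"

run_remote() {
    # Placeholder: echoes instead of executing. Replace with your
    # actual remote-execution mechanism.
    echo "[$1] $2"
}

upgrade_all() {
    # 1. Start the upgrade on the primary; --pause-after-tasks holds it
    #    before services start. ("default" is a hypothetical instance name.)
    run_remote "$PRIMARY" "setup.exe -U -silent --pause-after-tasks -instance default"
    # 2. (Wait here for the primary's upgrade-pause.signal to appear.)
    # 3. Start the upgrade on each secondary with the same flag.
    for node in $SECONDARIES; do
        run_remote "$node" "setup.exe -U -silent --pause-after-tasks -instance default"
    done
    # 4. (Wait here for each secondary's signal file to appear.)
    # 5. Release the primary first so its services come up.
    run_remote "$PRIMARY" "del upgrade-pause.signal"
    # 6. (Wait here for the primary's upgrade to finish completely.)
    # 7. Then release each secondary.
    for node in $SECONDARIES; do
        run_remote "$node" "del upgrade-pause.signal"
    done
}
```

The ordering enforced by this driver is the whole point: the primary is always released before any secondary.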
Tip
For deterministic automation workflows, use the --pause-after-tasks flag on all nodes and control the sequencing by deleting signal files in order: primary first, then secondaries after the primary completes.
See setup for the full list of setup.exe command-line arguments.