Adds Online Certificate Status Protocol (OCSP) support to the federated Openfire setup:
- Add certificate generation script with full PKI hierarchy
- Add certificate import script for Openfire keystores
- Implement OCSP responder service via Docker Compose
- Update documentation with OCSP usage instructions
The `-o` flag can now be used with `start.sh` to enable OCSP support.
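For illustration, a minimal sketch of how such a responder could be run using the OpenSSL CLI; the directory layout, file names, and port below are assumptions, not taken from this repository's scripts:

```sh
#!/usr/bin/env bash
# Minimal OCSP responder sketch using the OpenSSL CLI.
# All paths and the port are illustrative assumptions.
set -euo pipefail

CA_DIR=./ca   # hypothetical directory holding the CA database and keys
PORT=8888     # hypothetical port for the responder to listen on

# -index:  the CA database recording issued and revoked certificates
# -CA:     the issuing CA certificate
# -rsigner/-rkey: certificate and key used to sign OCSP responses
openssl ocsp \
  -port "${PORT}" \
  -index "${CA_DIR}/index.txt" \
  -CA "${CA_DIR}/ca.crt" \
  -rsigner "${CA_DIR}/ocsp.crt" \
  -rkey "${CA_DIR}/ocsp.key" \
  -text

# A certificate's status can then be queried with, for example:
#   openssl ocsp -issuer ca.crt -cert server.crt -url http://localhost:8888
```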
This updates the configuration to use a SNAPSHOT build of version 5.5.0-1 of the Hazelcast plugin, and updates its configuration files accordingly.
Note that this requires Openfire (the container) to use Java 17 or later.
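A quick way to verify the Java runtime inside the container, assuming (as an illustration) a Compose service named `xmpp1`:

```sh
# Print the Java version used by the Openfire container.
# The service name 'xmpp1' is an assumption for illustration.
docker compose exec xmpp1 java -version
```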
This adds support for IPv6 by giving all `start.sh` scripts a `-6` argument, which causes a dual-stack configuration to be loaded.
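For example, an environment could then be started in dual-stack mode like this (the directory name is an assumption):

```sh
# Start an environment with dual-stack (IPv4 + IPv6) networking.
# The 'federation' directory is an illustrative example.
cd federation
./start.sh -6
```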
The docker-compose files have each been split up: no file defines any networking anymore. Instead, one of two networking fragments is expected to be merged in.
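The merge relies on Docker Compose accepting multiple `-f` files, with later files layered over earlier ones; a sketch with assumed file names:

```sh
# Compose merges all files passed via -f, later files overriding earlier ones.
# The file names below are illustrative; the repository's actual names may differ.
docker compose \
  -f docker-compose.yml \
  -f network-dualstack.yml \
  up --detach
```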
When starting Openfire, a Hazelcast configuration option is passed through to the Openfire process to denote a preference for IPv4 or IPv6. This pass-through depends on a change in Openfire introduced by commit 2634d4a83a.
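One plausible shape for that pass-through, assuming Hazelcast's documented `hazelcast.prefer.ipv4.stack` system property and an illustrative `IPV6` shell variable (neither name is confirmed by this text):

```sh
# Pass a Hazelcast stack preference into the Openfire JVM.
# The IPV6 variable and the use of JAVA_OPTS are assumptions for illustration;
# hazelcast.prefer.ipv4.stack is a documented Hazelcast system property.
if [ "${IPV6:-false}" = "true" ]; then
  JAVA_OPTS="${JAVA_OPTS:-} -Dhazelcast.prefer.ipv4.stack=false"
else
  JAVA_OPTS="${JAVA_OPTS:-} -Dhazelcast.prefer.ipv4.stack=true"
fi
export JAVA_OPTS
```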
Various other minor changes have been applied, mostly to make the start scripts more consistent with each other.
Fixes #61
- Unified structure for auto-loading plugins for different modes of operation
- Added the heapdump, monitoring, and jsxc plugins for auto-loading
- Modified scripts to work with the Docker Compose v2 default of using hyphens in generated container names
- Fixed the script that copies the monitoring plugin, and added a script that copies the Hazelcast plugin
- Extra scripts for blocking and unblocking server 2 in federation mode (a sketch of one possible approach follows below)
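One way such blocking could be implemented is by detaching the container from the shared Compose network; both names below are assumptions:

```sh
# Simulate a blocked server by detaching server 2 from the compose network.
# The network name 'federation_network' and container name 'xmpp2' are
# illustrative assumptions.
docker network disconnect federation_network xmpp2

# Unblock: reattach the container to restore connectivity.
docker network connect federation_network xmpp2
```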
When the 'debug' flag is set in the database, Openfire will, at boot time, override the configuration in log4j2.xml.
When debug logging is desired, it's best to configure it in the log4j2.xml file instead of in the database.
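As a sketch, a debug level could be enabled directly in that file; the path and the exact `Root` logger markup are assumptions about a typical log4j2 setup:

```sh
# Switch the root logger from 'info' to 'debug' in log4j2.xml.
# Both the file path and the attribute value are illustrative assumptions.
sed -i 's|<Root level="info">|<Root level="debug">|' /usr/local/openfire/lib/log4j2.xml
```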
Openfire 4.7.0 brings a change to the name of the log files. They've been renamed from 'all.log' to 'openfire.log'.
This commit applies the same change to this project.
Prevents `otherxmpp` from always preferring a connection to node 3 in the cluster.
In my testing, clients from both sides were able to initiate and connect to a MUC on the other side of the s2s connection, although I'm certain the timings and strategy will want tuning for anything production-like (and perhaps even for testing environments, if we hit issues).