Making cluster-safe plugins

In clustered Bitbucket Server, plugins largely just work. However, there are a few things to be aware of in more advanced plugins. Testing your plugin in a cluster is also more involved and requires some additional configuration.

Home directory

Bitbucket Server has a local home and a shared home on all instances, not just clustered ones. This is intended to make it simpler for plugin developers to write their plugins, knowing that BITBUCKET_HOME will be laid out consistently on standalone and clustered instances. The home directory is laid out as follows:

BITBUCKET_HOME
|-- bin
|-- caches
|-- export
|-- lib
|-- log
|-- shared (BITBUCKET_SHARED_HOME)
|   |-- config
|   |-- data
|   |   |-- attachments
|   |   |-- avatars
|   |   |-- repositories
|   |-- plugins
|   |   |-- installed-plugins
|   |-- bitbucket.properties
|-- tmp

BITBUCKET_SHARED_HOME, by default, is BITBUCKET_HOME/shared. Plugin developers should not rely on this, however; the location of BITBUCKET_SHARED_HOME can be overridden using environment variables or system properties. Instead, plugin developers should use:

  • ApplicationPropertiesService.getHomeDir() => BITBUCKET_HOME
  • ApplicationPropertiesService.getSharedHomeDir() => BITBUCKET_SHARED_HOME

In a clustered environment, BITBUCKET_SHARED_HOME is guaranteed to be the same filesystem on every node, allowing data that is stored there to be accessed by all nodes.
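
For example, a component that needs data visible to every node might resolve a directory under the shared home. The following is a minimal sketch; the "data/example-plugin" subdirectory is an illustrative choice, not a reserved location:

 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;

 import com.atlassian.bitbucket.server.ApplicationPropertiesService;

 public class ExampleStorage {

     private final Path dataDir;

     public ExampleStorage(ApplicationPropertiesService propertiesService) throws IOException {
         //Resolve a plugin-specific directory under BITBUCKET_SHARED_HOME rather
         //than hard-coding BITBUCKET_HOME/shared
         dataDir = Files.createDirectories(
                 propertiesService.getSharedHomeDir().toPath().resolve("data/example-plugin"));
     }
 }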

Warning:


BITBUCKET_SHARED_HOME will generally be a network mount, such as an NFS partition. This imposes some special considerations:

  • Performance is likely to be slower than a local disk
  • Some filesystem-level behaviors, like locking and renaming, may not work as expected (or at all)
  • NFS configuration issues may trigger unexpected/unsafe behavior

Where possible, plugins that need to use BITBUCKET_SHARED_HOME should minimize their use of the filesystem.


A note about dependencies

You can ensure compatible versions of shared libraries, like Atlassian Beehive, Atlassian Cache and Atlassian Scheduler, are used by importing Bitbucket Server's parent POM, like this:

 <dependencyManagement>
     <dependencies>
         <dependency>
             <groupId>com.atlassian.bitbucket.server</groupId>
             <artifactId>bitbucket-parent</artifactId>
             <version>${bitbucket.version}</version>
             <type>pom</type>
             <scope>import</scope>
         </dependency>
     </dependencies>
 </dependencyManagement>

Where bitbucket.version is defined as the minimum version of Bitbucket Server you want your plugin to support.

A note about serialization

Communication among cluster nodes is facilitated by Java Serialization. This means, when using distributed types such as remote() Atlassian Cache Caches, job data in Atlassian Scheduler (even if you're using RunMode.RUN_LOCALLY!), or the BucketedExecutor, the objects you use must be Serializable. Externalizable extends Serializable and is also supported.

Most Bitbucket Server types, such as Project and Repository, are not Serializable and cannot be used directly in these contexts. Instead, their IDs (Project.getId(), Repository.getId(), the pair of [PullRequest.getToRef().getRepository().getId(), PullRequest.getId()], etc.) should be serialized and then their respective services (ProjectService.getById(int), RepositoryService.getById(int), PullRequestService.getById(int, long), etc.) should be used to re-retrieve the full objects as necessary. The services themselves are also not Serializable.

Because objects must be re-retrieved prior to processing, plugin code should account for the fact that the state of the objects may have changed:

  • Projects and repositories can be deleted, so getById(int) calls should appropriately handle null returns
  • Pull requests may be updated, referencing new commits or target branches or being merged or declined
  • Etc.

This is not intended as an exhaustive list; rather, it is meant to promote good programming practice and more robust processing. If the exact state of the object at serialization time matters, the relevant state should be extracted and included in the object's serialized form. This should be kept to a minimum, however, to keep the serialized representation of objects as small as possible. Large serialized blobs have impacts both on Bitbucket Server's memory footprint and on the efficiency of inter-node communication.
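
As a sketch of this pattern, a Serializable payload for a pull request might capture only the IDs needed for re-retrieval, plus any state that must be fixed at serialization time (the field choices here are illustrative):

 import java.io.Serializable;

 import com.atlassian.bitbucket.pull.PullRequest;
 import com.atlassian.bitbucket.pull.PullRequestService;

 public class PullRequestPayload implements Serializable {

     private final long pullRequestId;
     private final int repositoryId;
     //Captured because, in this example, the commit at serialization time matters
     private final String fromHash;

     public PullRequestPayload(PullRequest pullRequest) {
         pullRequestId = pullRequest.getId();
         repositoryId = pullRequest.getToRef().getRepository().getId();
         fromHash = pullRequest.getFromRef().getLatestCommit();
     }

     public PullRequest resolve(PullRequestService pullRequestService) {
         //May return null if the pull request or its repository has been deleted
         return pullRequestService.getById(repositoryId, pullRequestId);
     }
 }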

Note:


Bitbucket Server standalone instances are considered one-node clusters. That means the same Serializable rules apply regardless of whether multiple nodes are actually present. This is intended to make plugin developers' lives simpler: Bitbucket Server behaves consistently whether clustered or standalone, so plugins written for a cluster will work correctly standalone.


Caching in a cluster

In simple plugins, it is common to cache data using ConcurrentMaps or Guava Caches. This caching will not work correctly in a cluster because updating the data on one node will leave stale data in the caches on other nodes.

Plugins should use Atlassian Cache, an API provided by Bitbucket Server for plugins. You can add Atlassian Cache to your plugin with the following Maven dependency:

 <dependency>
     <groupId>com.atlassian.cache</groupId>
     <artifactId>atlassian-cache-api</artifactId>
     <scope>provided</scope>
 </dependency>

To use Atlassian Cache, you:

  • Add <component-import key="cacheFactory" interface="com.atlassian.cache.CacheFactory"/> in atlassian-plugin.xml
  • Add the CacheFactory to the relevant component's constructor
  • Create the cache. You can also pass CacheSettings (created through the CacheSettingsBuilder class) to control many aspects of how the cache works. Caches are remote() by default.
cacheFactory.getCache("com.example.plugin:example-plugin-key:Example  Cache", 
             new CacheLoader<String, String>() {
                 @Nonnull
                 @Override
                 public String load(@Nonnull String key) {
                     return "Value";
                 }
             }
     );
  • You should create your Cache once, in your constructor, and use the same instance afterward
    • Continuously re-fetching the cache from the CacheFactory is inefficient
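
Putting those steps together, a component might look like the following sketch. The CacheSettings shown are illustrative; see CacheSettingsBuilder for the full set of options:

 import java.util.concurrent.TimeUnit;

 import javax.annotation.Nonnull;

 import com.atlassian.cache.Cache;
 import com.atlassian.cache.CacheFactory;
 import com.atlassian.cache.CacheLoader;
 import com.atlassian.cache.CacheSettingsBuilder;

 public class ExampleCacheComponent {

     private final Cache<String, String> cache;

     public ExampleCacheComponent(CacheFactory cacheFactory) {
         //Created once, in the constructor, and reused afterward
         cache = cacheFactory.getCache("com.example.plugin:example-plugin-key:ExampleCache",
                 new CacheLoader<String, String>() {
                     @Nonnull
                     @Override
                     public String load(@Nonnull String key) {
                         return "Value"; //compute the value for the missing key
                     }
                 },
                 new CacheSettingsBuilder()
                         .expireAfterWrite(30, TimeUnit.MINUTES) //illustrative expiry
                         .build());
     }

     public String get(String key) {
         return cache.get(key);
     }
 }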

Note:


If you are using a remote() cache (the default), your keys and values must be Serializable. Externalizable extends Serializable and is also acceptable.


Scheduling jobs in a cluster

Without any intervention, scheduled tasks will execute independently on each Bitbucket Server instance in a cluster. In some circumstances, this is desirable behavior. In other situations, you will need to ensure that a job is only executed once per cluster. This is accomplished by using Atlassian Scheduler with RunMode.RUN_ONCE_PER_CLUSTER.

You can add Atlassian Scheduler to your plugin with the following Maven dependency:

 <dependency>
     <groupId>com.atlassian.scheduler</groupId>
     <artifactId>atlassian-scheduler-api</artifactId>
     <scope>provided</scope>
 </dependency>

To use Atlassian Scheduler you:

  • Add <component-import key="schedulerService" interface="com.atlassian.scheduler.SchedulerService"/> to your atlassian-plugin.xml
  • Add the SchedulerService to the relevant component's constructor
  • Register your JobRunner
  • Schedule your job, assigning it an ID and providing its JobConfig which describes:
    • How often the job should run
    • The delay for the initial run
    • Whether the job should run on each cluster node or once across the cluster
  • Unregister your JobRunner during shutdown

A JobRunner handles JobRunnerRequests and performs the actual processing. Generally each node in a cluster registers its own JobRunner. This makes every node a candidate for running the job, allowing the cluster to distribute load more efficiently.

 public class MyJobRunner implements JobRunner {
     @Override
     public JobRunnerResponse runJob(JobRunnerRequest request) {
         //Do some meaningful work

         return JobRunnerResponse.success();
     }
 }

 schedulerService.registerJobRunner("com.example.plugin:example-plugin-key:ExampleJobRunner", new MyJobRunner());

When a job is scheduled, the key assigned to the JobRunner when it is registered is used to associate the job with its runner:

 schedulerService.scheduleJob( 
         JobId.of("com.example.plugin:example-plugin-key:ExampleJob"), 
         JobConfig.forJobRunnerKey("com.example.plugin:example-plugin-key:ExampleJobRunner") 
                 .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER) 
                 .withSchedule(Schedule.forInterval(intervalInMillis, new Date(System.currentTimeMillis() + intervalInMillis)))); 
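
If the job needs input when it runs, it can be attached via JobConfig.withParameters; as the note further below explains, the values must be Serializable. A sketch, using a hypothetical "repositoryId" parameter (Serializable and Collections come from java.io and java.util):

 schedulerService.scheduleJob(
         JobId.of("com.example.plugin:example-plugin-key:ExampleJob"),
         JobConfig.forJobRunnerKey("com.example.plugin:example-plugin-key:ExampleJobRunner")
                 .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER)
                 .withSchedule(Schedule.forInterval(intervalInMillis, new Date(System.currentTimeMillis() + intervalInMillis)))
                 //Store the repository's ID; Repository itself is not Serializable
                 .withParameters(Collections.<String, Serializable>singletonMap("repositoryId", repository.getId())));

The JobRunner can then read the value back from request.getJobConfig().getParameters() and re-retrieve the Repository by its ID.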

During application shutdown, you should unregister the JobRunner so that the node shutting down is no longer considered a candidate for running the job:

 schedulerService.unregisterJobRunner("com.example.plugin:example-plugin-key:ExampleJobRunner"); 

The easiest way to put together the register and unregister lifecycle is to use the Spring InitializingBean and DisposableBean interfaces on your component:

 public class ExampleComponent implements DisposableBean, InitializingBean { 

     private static final JobId JOB_ID = JobId.of("com.example.plugin:example-plugin-key:ExampleJob"); 
     private static final long JOB_INTERVAL = TimeUnit.MINUTES.toMillis(30L); 
     private static final String JOB_RUNNER_KEY = "com.example.plugin:example-plugin-key:ExampleJobRunner"; 

     private final SchedulerService schedulerService; 

     public ExampleComponent(SchedulerService schedulerService) { 
         this.schedulerService = schedulerService; 
     } 

     @Override 
     public void afterPropertiesSet() throws SchedulerServiceException { 
         //The JobRunner could be another component injected in the constructor, a 
         //private nested class, etc. It just needs to implement JobRunner 
         schedulerService.registerJobRunner(JOB_RUNNER_KEY, new MyJobRunner()); 
         schedulerService.scheduleJob(JOB_ID, JobConfig.forJobRunnerKey(JOB_RUNNER_KEY) 
                 .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER) 
                 .withSchedule(Schedule.forInterval(JOB_INTERVAL, new Date(System.currentTimeMillis() + JOB_INTERVAL)))); 
     } 

     @Override 
     public void destroy() { 
         schedulerService.unregisterJobRunner(JOB_RUNNER_KEY); 
     } 
 } 

Note:


Job data provided in JobConfig is required to be Serializable, regardless of RunMode. The backing store for job data may serialize objects even for RunMode.RUN_LOCALLY jobs.


Warning:


Generally you should not unregister the job itself. Unregistering a job unregisters it across the cluster, not just on the node shutting down.

When multiple nodes schedule the same job with different schedules (even schedules differing by milliseconds), the last registration wins and replaces the previous job configuration and schedule. If the schedule is eligible to run immediately and multiple nodes take this action at close to the same time, the job might run more than once as the registrations replace one another.

Also note that scheduled jobs in Bitbucket Server are not persistent. They must be rescheduled each time the application starts.


Locking in a cluster

Java's locking primitives, like Lock, synchronized, etc., only apply to a single JVM and will not correctly serialize operations across a cluster. Instead, you need to use a cluster-wide lock. This is accomplished by using one of:

  • Atlassian Beehive's ClusterLockService
    • Atlassian Beehive is cross-product and also works in Confluence and JIRA
  • LockService, part of the bitbucket-api module
    • LockService is specific to Bitbucket Server

Atlassian Beehive

You can add Atlassian Beehive to your plugin with the following Maven dependency:

 <dependency> 
     <groupId>com.atlassian.beehive</groupId> 
     <artifactId>beehive-api</artifactId> 
     <scope>provided</scope> 
 </dependency> 

To use Atlassian Beehive's ClusterLockService you:

  • Add <component-import key="clusterLockService" interface="com.atlassian.beehive.ClusterLockService"/> to your atlassian-plugin.xml
  • Add the ClusterLockService to the relevant component's constructor
  • Create your ClusterLock (which extends the standard Java Lock interface):
 public class ExampleComponent { 

     private final ClusterLock taskLock; 

     public ExampleComponent(ClusterLockService lockService) { 
         taskLock = lockService.getLockForName(getClass().getName() + ":TaskLock"); 
     } 

     public void performTask() { 
         if (taskLock.tryLock()) { 
             try { 
                 //Do something, knowing no other node in the cluster is accessing 
                 //whatever resource you're protecting 
             } finally { 
                 taskLock.unlock(); 
             } 
         } else { 
             //Another node in the cluster holds the lock already 
         } 
     } 
 } 

LockService

You can use the LockService by adding a dependency on bitbucket-api, generally already a dependency of any Bitbucket Server plugin:

 <dependency> 
     <groupId>com.atlassian.bitbucket.server</groupId>
     <artifactId>bitbucket-api</artifactId>
     <scope>provided</scope> 
 </dependency> 

The LockService is used in a similar way to Atlassian Beehive's ClusterLockService:

  • Add <component-import key="lockService" interface="com.atlassian.bitbucket.concurrent.LockService"/> to atlassian-plugin.xml
  • Add the LockService to the relevant component's constructor
  • Create your Lock:
 public class ExampleComponent { 

     private final Lock taskLock; 

     public ExampleComponent(LockService lockService) { 
         taskLock = lockService.getLock(getClass().getName() + ":TaskLock"); 
     } 

     public void performTask() { 
         if (taskLock.tryLock()) { 
             try { 
                 //Do something, knowing no other node in the cluster is accessing 
                 //whatever resource you're protecting 
             } finally { 
                 taskLock.unlock(); 
             } 
         } else { 
             //Another node in the cluster holds the lock already 
         } 
     } 
 } 

In addition to Locks, the LockService provides access to more specialized RepositoryLocks and PullRequestLocks.

  • RepositoryLock allows concurrent operations on different Repository instances, but serializes operations on the same instance
  • PullRequestLock allows concurrent operations on different PullRequest instances, but serializes operations on the same instance

These locks reduce contention by allowing concurrent operations on different instances while still ensuring each instance is acted on serially. They are cluster-safe, meaning only one node in the cluster will operate on a given Repository or PullRequest at a time.
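
For example, a RepositoryLock might be used as in the following sketch, which assumes the withLock(Repository, Operation) form of the API; the lock name is illustrative:

 import com.atlassian.bitbucket.concurrent.LockService;
 import com.atlassian.bitbucket.concurrent.RepositoryLock;
 import com.atlassian.bitbucket.repository.Repository;
 import com.atlassian.bitbucket.util.Operation;

 public class ExampleRepositoryTask {

     private final RepositoryLock repositoryLock;

     public ExampleRepositoryTask(LockService lockService) {
         //The name defines a namespace; a RepositoryLock with a different name
         //can lock the same repository independently
         repositoryLock = lockService.getRepositoryLock(getClass().getName() + ":TaskLock");
     }

     public void performTask(Repository repository) {
         repositoryLock.withLock(repository, new Operation<Void, RuntimeException>() {
             @Override
             public Void perform() {
                 //Do something, knowing no other thread on any node holds this
                 //namespace's lock for the same repository; operations on other
                 //repositories proceed concurrently
                 return null;
             }
         });
     }
 }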

Note:


ClusterLock, Lock, PullRequestLock and RepositoryLock are not Serializable and cannot be transferred between nodes.

  • Locks can only be unlocked by the thread that acquired them
  • Locks cannot be used as job data with Atlassian Scheduler

RepositoryLock and PullRequestLock are namespaced. The same Repository or PullRequest can be locked simultaneously in multiple RepositoryLock or PullRequestLock instances, respectively, which have different names.

It is not possible, from a plugin, to access the locks the host application uses to protect its own processing. They are intentionally stored in an unreachable namespace.


Executors in a cluster

ExecutorServices are useful for managing threaded jobs. Bitbucket Server provides a ScheduledExecutorService which can be imported by plugins to use a standard thread pool. However, ExecutorServices are local to the node where they are created. In a cluster, to efficiently distribute processing, it is sometimes desirable to allow scheduling a task on one node and processing it on another. To facilitate this, Bitbucket Server provides a BucketedExecutor in bitbucket-api, which is generally a dependency of any Bitbucket Server plugin.

 <dependency> 
     <groupId>com.atlassian.bitbucket.server</groupId>
     <artifactId>bitbucket-api</artifactId>
     <scope>provided</scope> 
 </dependency> 

To use the BucketedExecutor you:

  • Add <component-import key="concurrencyService" interface="com.atlassian.bitbucket.concurrent.ConcurrencyService"/> to atlassian-plugin.xml
  • Add the ConcurrencyService to the relevant component's constructor
  • Create your BucketedExecutor:
 public class MyTaskRequest implements Serializable { 
     //Repository is not Serializable 
     private final int repositoryId; 

     public MyTaskRequest(Repository repository) { 
         repositoryId = repository.getId(); 
     } 

     public int getRepositoryId() { 
         return repositoryId; 
     } 
 } 

 Function<MyTaskRequest, String> bucketFunction = new Function<MyTaskRequest, String>() { 
     @Override 
     public String apply(MyTaskRequest task) { 
         return String.valueOf(task.getRepositoryId()); 
     } 
 };

 BucketProcessor<MyTaskRequest> processor = new BucketProcessor<MyTaskRequest>() { 
     @Override 
     public void process(@Nonnull String bucketId, @Nonnull List<MyTaskRequest> tasks) { 
         for (MyTaskRequest task : tasks) { 
             Repository repository = repositoryService.getById(task.getRepositoryId()); 
             if (repository == null) { 
                 log.info("Repository {} was deleted", task.getRepositoryId()); 
                 continue; 
             } 
             //Do some processing 
         } 
     } 
 };

 BucketedExecutor<MyTaskRequest> executor = concurrencyService.getBucketedExecutor( 
         "com.example.plugin:example-plugin-key:ExampleBucketedExecutor", 
         new BucketedExecutorSettings.Builder<>(bucketFunction, processor) 
                 //How many tasks to process at once? Integer.MAX_VALUE processes the 
                 //whole bucket, 1 will receive one task at a time
                 .batchSize(Integer.MAX_VALUE) 
                 //How many retries, if processing fails? After the retries are 
                 //exhausted, the requests that failed will be discarded 
                 .maxAttempts(1) 
                 //How many threads can process tasks (from different buckets) at the 
                 //same time? Concurrency can be PER_NODE or PER_CLUSTER 
                 .maxConcurrency(config.getThreadCount(), ConcurrencyPolicy.PER_CLUSTER) 
                 .build()); 

Each BucketedExecutor is given a Guava Function which is used to assign tasks to buckets. The plugin developer is free to make buckets as coarse- or fine-grained as desired. The BucketedExecutor offers two very useful guarantees:

  • Tasks will always be passed to the BucketProcessor in the same order they were submitted in
  • Exactly one thread may process a given bucket at a time, regardless of concurrency
    • Concurrency allows multiple buckets to be processed simultaneously
    • This means BucketProcessors generally do not require locking, if the buckets are well-defined
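
Submitting work is then a single call, continuing the example above; the node that processes the task may not be the node that submitted it:

 //The task is serialized on submission, even on a standalone instance
 executor.submit(new MyTaskRequest(repository));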

Warning:


The task type used to specialize the BucketedExecutor generic must be Serializable. Even standalone instances (which are considered one-node clusters) will serialize the tasks as they are submitted, prior to invoking the BucketProcessor.

A BucketedExecutor's concurrency policy can be either ConcurrencyPolicy.PER_CLUSTER or ConcurrencyPolicy.PER_NODE. PER_CLUSTER is used if you need to throttle concurrency because of a global resource (e.g. a remote service or shared filesystem). PER_NODE is used if you need to throttle concurrency because of a local resource (e.g. CPU or memory on the node).

When ConcurrencyPolicy.PER_CLUSTER is used, the concurrency limit is divided by the number of nodes in the cluster to determine how many buckets each node can process concurrently. The result is rounded up, so every node in the cluster is always allowed to process at least one bucket.

  • maxConcurrency(2, ConcurrencyPolicy.PER_CLUSTER) in a three-node cluster behaves like maxConcurrency(1, ConcurrencyPolicy.PER_NODE): 2/3 = 0.667, which rounds up to 1 per node
  • maxConcurrency(3, ConcurrencyPolicy.PER_CLUSTER) in a two-node cluster behaves like maxConcurrency(2, ConcurrencyPolicy.PER_NODE): 3/2 = 1.5, which rounds up to 2 per node

Event handling in a cluster

Bitbucket Server does not offer cluster-wide events. Events, such as RepositoryPushEvent, are handled only on the node that raised them. In other words, whichever node processed the push will be the only node that processes events for that push. This is an intentional design decision. The development team feels that this makes implementing a clustered plugin simpler, because plugin developers are not required to prevent re-processing the same event on each node.
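
If event-driven work should be distributed across the cluster, the listener can capture the event on the node that raised it and hand the work off, for example to a BucketedExecutor. A sketch, reusing the illustrative MyTaskRequest type from the previous section (the listener must still be registered with the EventPublisher):

 import com.atlassian.bitbucket.concurrent.BucketedExecutor;
 import com.atlassian.bitbucket.event.repository.RepositoryPushEvent;
 import com.atlassian.event.api.EventListener;

 public class ExamplePushListener {

     private final BucketedExecutor<MyTaskRequest> executor;

     public ExamplePushListener(BucketedExecutor<MyTaskRequest> executor) {
         this.executor = executor;
     }

     @EventListener
     public void onPush(RepositoryPushEvent event) {
         //The event is raised only on the node that processed the push, but the
         //submitted task may be processed by any node in the cluster
         executor.submit(new MyTaskRequest(event.getRepository()));
     }
 }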

Plugin installation in a cluster

Installation for the Bitbucket Server cluster administrator is the same as with a single instance. Uploading a plugin through the web interface will store the plugin in BITBUCKET_SHARED_HOME and ensure that it is installed on all instances of the cluster.

Currently, cluster instances must be homogeneous. However, future plans may introduce support for rolling upgrades and other features that introduce disparities, whether temporary or permanent, between cluster nodes. Plugin developers can assume all nodes will:

  • Have compatible versions of all exported APIs
  • Have consistent home directory layouts

For the best forward compatibility, plugin developers should not assume all nodes are running the same version of Bitbucket Server.

Plugin testing in a cluster

It is important to test your plugin in a cluster. When running Bitbucket Server via the Atlassian SDK (AMPS), a clustered license is used, so multiple instances started via the Atlassian SDK can be clustered.

Alternatively, you can install the following timebombed license, which is cluster-enabled. This license is only valid for 3 hours, after which you will be unable to push to Bitbucket Server without restarting the servers:

AAABAA0ODAoPeNptUE1Lw0AUvPdXLHhOybZGsbCgpiE2aJrSDZ6
f8dUuJJvy3ibYf2+M22LF63wxM1ev+C6yzgopRXi3iORiFopkq8
UslNcTdsD7adJD3YEzrVU7qBk9HBOO4BIcqm95EN4EUp7Y1jqoX
A4NqgdXA7MBKzSyQ/KSFzDWoQVbYfJ5MHQck4r5k+cHu+lROerw
MjQZnLVyY9Y9nMKnVdt43bOp0DLq4wHHAnpYtMrTyRapR1ot1WN
WboLNPNZBWmZRoKPb1FsLIGeRRpuH1vQB1vDPA+ctnhw6Q4zDDv
pdNO+aN6T1rmQkVnJ22evftUVH1R4Y/975BcASkF4wLQIVAJHuX
Zz1SsymUm2B5V7p7Pap48xzAhROyzM1l9a1OqcWzxseRNmnZ4Xq
mQ==X02d9

The two easiest ways to start a cluster of Bitbucket Server nodes are:

  • Run your integration tests (which will start and stop your cluster automatically in the process of deploying your plugin and running tests on it)
  • Use atlas-run to spin up a cluster which you can use to iteratively develop your plugin, deploy it and test its cluster safety

Both methods require Maven configuration, but the same configuration can be used for both.

To configure an N-node cluster you must specify N <product/> elements, one per node. Each node needs slightly different configuration to ensure it can start up independently (e.g. so each listens on different ports when running on the same machine), find the other nodes in the cluster and, finally, join the cluster.

Specifically, each node will need:

  • Its own HTTP port (supplied through the httpPort element)
  • Its own SSH port (supplied through the plugin.ssh.port entry in the systemPropertyVariables element)

All nodes will need to share:

  • A common BITBUCKET_SHARED_HOME directory (supplied through the bitbucket.shared.home entry in the systemPropertyVariables element)
  • An external database (connection details supplied through the jdbc.* entries in the systemPropertyVariables element)

Each node will also need a way to find other nodes. This is supplied through the hazelcast.network.tcpip entry of the systemPropertyVariables element for TCP/IP and/or the hazelcast.network.multicast entry for IP multicast. Without one of these settings set to true (both are false by default), a node will never look for other nodes and thus never join a cluster.

The following Maven pom.xml configuration will start up a cluster of two Bitbucket Server nodes. Node 1 uses port 7991 for HTTP and 7997 for SSH; node 2 uses port 7992 for HTTP and 7998 for SSH. Both nodes use TCP/IP, with the default TCP/IP settings, to find each other. They share a BITBUCKET_SHARED_HOME of /path/to/bitbucket/shared/home (substitute a path appropriate for your environment) and connect to the same MySQL database, called bitbucket. Also note that, because they connect to a MySQL database, the MySQL JDBC driver jar must be made available to Bitbucket Server. This is achieved through the libArtifact entry for mysql:mysql-connector-java.

<build>
    <plugins>
        <plugin>
            <groupId>com.atlassian.maven.plugins</groupId>
            <artifactId>bitbucket-maven-plugin</artifactId>
            <version>${amps.version}</version>
            <extensions>true</extensions>
            <configuration>
                <products>
                    <!-- Node 1 -->
                    <product>
                        <id>bitbucket</id>
                        <instanceId>bitbucket-node-1</instanceId>
                        <version>${bitbucket.version}</version>
                        <dataVersion>${bitbucket.data.version}</dataVersion>
                        <!-- override the HTTP port used for this node -->
                        <httpPort>7991</httpPort>
                        <systemPropertyVariables>
                            <bitbucket.shared.home>/path/to/bitbucket/shared/home</bitbucket.shared.home>
                            <!-- override the SSH port used for this node -->
                            <plugin.ssh.port>7997</plugin.ssh.port>
                            <!-- override database settings so both nodes use a single database -->
                            <jdbc.driver>com.mysql.jdbc.Driver</jdbc.driver>
                            <jdbc.url>jdbc:mysql://localhost:3306/bitbucket?characterEncoding=utf8&amp;useUnicode=true&amp;sessionVariables=storage_engine%3DInnoDB</jdbc.url>
                            <jdbc.user>bitbucketuser</jdbc.user>
                            <jdbc.password>password</jdbc.password>
                            <!-- allow this node to find other nodes via TCP/IP -->
                            <hazelcast.network.tcpip>true</hazelcast.network.tcpip>
                            <!-- set to true if your load balancer supports sticky sessions -->
                            <hazelcast.http.stickysessions>false</hazelcast.http.stickysessions>
                        </systemPropertyVariables>
                        <libArtifacts>
                            <!-- ensure MySQL drivers are available -->
                            <libArtifact>
                                <groupId>mysql</groupId>
                                <artifactId>mysql-connector-java</artifactId>
                                <version>5.1.32</version>
                            </libArtifact>
                        </libArtifacts>
                    </product> 
                    <!-- Node 2 -->
                    <product>
                        <id>bitbucket</id>
                        <instanceId>bitbucket-node-2</instanceId>
                        <version>${bitbucket.version}</version>
                        <dataVersion>${bitbucket.data.version}</dataVersion>
                        <!-- override the HTTP port used for this node -->
                        <httpPort>7992</httpPort>
                        <systemPropertyVariables>
                            <bitbucket.shared.home>/path/to/bitbucket/shared/home</bitbucket.shared.home>
                            <!-- override the SSH port used for this node -->
                            <plugin.ssh.port>7998</plugin.ssh.port>
                            <!-- override database settings so both nodes use a single database -->
                            <jdbc.driver>com.mysql.jdbc.Driver</jdbc.driver>
                            <jdbc.url>jdbc:mysql://localhost:3306/bitbucket?characterEncoding=utf8&amp;useUnicode=true&amp;sessionVariables=storage_engine%3DInnoDB</jdbc.url>
                            <jdbc.user>bitbucketuser</jdbc.user>
                            <jdbc.password>password</jdbc.password>
                            <!-- allow cluster nodes to find each other over TCP/IP thus enabling clustering for this node -->
                            <hazelcast.network.tcpip>true</hazelcast.network.tcpip>
                            <!-- set to true if your load balancer supports sticky sessions -->
                            <hazelcast.http.stickysessions>false</hazelcast.http.stickysessions>
                        </systemPropertyVariables>
                        <libArtifacts>
                            <!-- ensure MySQL drivers are available -->
                            <libArtifact>
                                <groupId>mysql</groupId>
                                <artifactId>mysql-connector-java</artifactId>
                                <version>5.1.32</version>
                            </libArtifact>
                        </libArtifacts>
                    </product> 
                </products>
                <testGroups>
                    <!-- tell AMPS / Maven which products (i.e. nodes) to run for the named testGroup 'clusterTestGroup' -->
                    <testGroup>
                        <id>clusterTestGroup</id>
                        <productIds>
                            <productId>bitbucket-node-1</productId>
                            <productId>bitbucket-node-2</productId>
                        </productIds>
                    </testGroup>
                </testGroups>
            </configuration>
        </plugin>

        ...

    </plugins>
</build>

...

<properties>
    <bitbucket.version>4.0.0</bitbucket.version>
    <bitbucket.data.version>4.0.0</bitbucket.data.version>
    <amps.version>6.1.0</amps.version>
</properties>

Warning:


amps.version should be set to the AMPS version used by the minimum version of Bitbucket Server your plugin supports. The use of dependencyManagement and <scope>import</scope> in your pom.xml, as discussed earlier, only imports dependencies, not properties or plugins, so this value needs to be manually synchronised with Bitbucket Server's as your minimum supported Bitbucket Server changes.


To run the cluster configured above via Atlassian AMPS you would run:

atlas-run --testGroup clusterTestGroup

To run your integration tests in Maven against the cluster configured above, the following would normally suffice:

atlas-mvn clean install

For both methods above, you will almost always want a load balancer running to balance HTTP and SSH traffic between the nodes (and so that you can use a single port per protocol to communicate with the cluster). Taking the above configuration as an example, you would want your load balancer to balance HTTP traffic on port 7990 (standalone Bitbucket Server's HTTP default) across ports 7991 and 7992, and SSH traffic on port 7999 (standalone Bitbucket Server's SSH default) across ports 7997 and 7998.

Atlassian provides a simple Maven plugin which you can configure and run as a load balancer. Again taking the above configuration as an example, you would add the following to your Maven POM:

<build>
    <plugins>
      <plugin>
        <groupId>com.atlassian.maven.plugins</groupId>
        <artifactId>load-balancer-maven-plugin</artifactId>
        <version>1.1</version>
        <executions>
            <execution>
                <id>start-load-balancer</id>
                <phase>pre-integration-test</phase>
                <goals>
                    <goal>start</goal>
                </goals>
            </execution>
            <execution>
                <id>stop-load-balancer</id>
                <phase>post-integration-test</phase>
                <goals>
                    <goal>stop</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
          <balancers>
              <balancer>
                <port>7990</port>
                <targets>
                  <target>
                    <port>7991</port>
                  </target>
                  <target>
                    <port>7992</port>
                  </target>
                </targets>
              </balancer>
              <balancer>
                <port>7999</port>
                <targets>
                  <target>
                    <port>7997</port>
                  </target>
                  <target>
                    <port>7998</port>
                  </target>
                </targets>
              </balancer>
          </balancers>
        </configuration>
      </plugin>
    </plugins>
</build>

When you run your integration tests from Maven, this plugin will start a load balancer, as configured, before the cluster starts and stop it once your tests have finished and the cluster has been shut down.

If you start your Bitbucket Server cluster via atlas-run --testGroup clusterTestGroup, you can run the load balancer separately via:

atlas-mvn com.atlassian.maven.plugins:load-balancer-maven-plugin:1.1:run

Marking your plugin as cluster compatible for the Marketplace

When you list your first cluster-compatible plugin version in the Marketplace, modify your atlassian-plugin.xml descriptor file. This tells the Marketplace and UPM that your plugin is cluster compatible. Add the following parameter inside the plugin-info section:

<param name="atlassian-data-center-compatible">true</param> 

Here's an example of a generic plugin-info block with this param:

<plugin-info>
    <description>Base POM for Atlassian projects</description>
    <version>4.4.1</version>
    <vendor name="Atlassian" url="http://www.atlassian.com" />
    <param name="atlassian-data-center-compatible">true</param>
</plugin-info>