Looking for Jira Data Center to Cloud Migration Tips?
Breaking news (September 2025) - Data Center Ascend (Closure)
Site Concept
Migrating a Confluence instance means migrating to a "Site" - do notice that a Site is, in Atlassian terms, a trusted scoped place, where not everything is under access control.
If you need really strict separation, such as segregating customers completely from each other, a single Site simply will not do - you will need a site per customer.
Our migration attempt failed because customers could see each other's image, full name and email in the new "Teams" feature - which cannot be disabled.
If you need 100% separation between Confluence licensed accounts, you need multiple sites.
Preparing
There are a few things to do before the first test migration:
- Disable mail
- Limit access
- Install Apps - notice this is not like Data Center; you cannot disable and enable them.
Clean up
I do recommend cleaning up first:
- Back up and delete spaces that are no longer needed.
- Empty the trash in each space, or use global retention to minimize version and trash size. https://confluence.atlassian.com/clean/clean-up-your-confluence-instance-1026047969.html
- Use ScriptRunner to purge old page versions; this can also be done via the API (as an overlap or alternative to global and space retention).
Do not use - or be very careful using - retention or clean-up via ScriptRunner or other tools.
If you are using tools like Comala Document Management (or store important data in Page Properties for functionality), retention will delete versions of documents that have a state like "Published" or "Approved". The retention engine knows nothing about Comala Document Management.
After that hard lesson, I do NOT recommend using retention or clean-up on page versions at all.
- Remove stale drafts: https://support.atlassian.com/confluence/kb/how-to-manually-remove-stale-drafts-from-confluence-database/
- Look for attachments (see the SQL under "Script: Find attachment size sum per space" below). Sometimes an on-premise Confluence has been used as file storage, so some pages may have a lot of (huge) attachments - and since Confluence does not require attachments to be placed on or referenced from a page, they just build up and people tend to treat the page as a storage drive.
- Investigate user macros - these are not supported in Cloud (see "Replacing User Macros" below). Delete all that can be deleted; a small query to gauge how widely a macro is used follows right after this list.
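A quick way to gauge how widely a user macro is used is to count its occurrences directly in the database, the same way the SQL under "Manipulations in the database" below matches on ac:name. A minimal sketch, assuming the same PostgreSQL connection details as the scripts further down; the macro name "my-user-macro" is just a placeholder:
#!/bin/bash
# Count how many content body versions reference a given user macro.
# Note: like the replacement SQL further down, this matches ALL versions, not just the latest.
MACRO="my-user-macro"
SQL="SELECT count(*) FROM bodycontent WHERE body LIKE '%ac:name=\"${MACRO}\"%';"
echo "$SQL" | psql -t -h localhost -U confluenceuser -d confluence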
Testing
Do get Premium (or Enterprise) and use the sandbox functionality for testing; at first I created a new site instead - it is a hassle in every way: creation, billing and termination.
In the sandbox you can always install all Apps from the real site without problems. Apps that are not on the real site are on trial/payment.
The sandbox migration will also give you a realistic time estimate - ours is around 38 hours, but two test migrations to the sandbox took 7-8 hours.
You can test migrate spaces to Cloud as many times as needed.
There is another bug where deletion of spaces fails, see https://jira.atlassian.com/browse/CONFCLOUD-80839. The only solution is to raise a ticket with Atlassian Support.
Nested Macros
My main problem overall was nested macros - in abundance... Like this (and via very complex user macro code):
The sample is from an App, but we also had these in every other thinkable combination...
In Cloud, nesting is (mostly) not possible, except in very few cases, due to the security model of apps; maybe Forge will improve this.
Nested macros are problematic: typically such pages are not converted to the new editor, making them almost impossible to edit in Cloud without possible data loss.
Notice this bug: https://jira.atlassian.com/browse/CONFCLOUD-80511 - which Atlassian has marked "Won't Fix". It is not just the Expand macro in panels; most macros in panels will give problems.
Replacing User Macros
The best tip is the "User Macro for Confluence Cloud" App, as it helps with three things:
- Actually migrating user macros
- Making macro replacements for un-migratable macros or Apps that you do not want in Cloud
- Making dummy macros to mitigate non-existing or non-functioning macros.
One of the great parts of the App is that you can define the macro name yourself - unlike ScriptRunner, where the macro is always named sr-macro-some-uuid.
Nesting is partly supported here - at least if the nested macro is also a user macro - see this Loom video.
This code replaces a Data Center macro with a dummy macro in Cloud:
#if ($renderContext.outputType == "preview")
<div role="note" aria-labelledby="message-warning-title" class="aui-message aui-message-warning">
<p id="message-warning-title" aria-hidden="true" class="title">
<strong hidden="">Warning: </strong>
<strong>Removed after migration</strong>
</p>
<p>This macro is no longer available. Please delete it.</p>
</div>
#else
<script>
[50, 300, 700].forEach(t => setTimeout(() => AP.resize('0px', '0px'), t));
</script>
#end
In Cloud, it is invisible in view mode, but when editing a page, it shows like this:
The complete macro for import: dummy-macro.json
The founder wrote this code for us, to replace an old macro showing a Google Calendar:
#set($url = $parameters["urls"])
## Check the URL
#if(!$StringUtils.startsWith($url, "https://www.google.com/calendar/") && !$StringUtils.startsWith($url, "https://google.com/calendar/") && !$StringUtils.startsWith($url, "https://www.calendar.google.com/calendar/") && !$StringUtils.startsWith($url, "https://calendar.google.com/calendar/"))
<div role="note" aria-labelledby="title" class="aui-message aui-message-error">
<p id="title" aria-hidden="true" class="title">
<strong hidden>Error: </strong>
<strong>Invalid URL</strong>
</p>
<p>URL must start with <i>https://google.com/calendar/</i> or <i>https://calendar.google.com/calendar/</i></p>
</div>
#stop
#end
## Check mode
#if($parameters["mode"])
#set($mode = $StringUtils.upperCase($parameters["mode"]))
#else
#set($mode = "MONTH")
#end
#set($url = "${url}&mode=${mode}")
## Set controls
#set($navigation = "&showTitle=0&showPrint=0&showTabs=0&showCalendars=0&showTz=0")
#set($none = "${navigation}&showNav=0&showDate=0")
#if ($parameters["controls"] == "navigation")
#set($url = "${url}${navigation}")
#elseif ($parameters["controls"] == "none")
#set($url = "${url}${none}")
#end ## all — is default state
## Convert first day of the week
#set($daysMap = {
"sunday": 1,
"monday": 2,
"tuesday": 3,
"wednesday": 4,
"thursday": 5,
"friday": 6,
"saturday": 7
})
#if($parameters["firstDay"])
#set($wkst = $daysMap.get($parameters["firstDay"]))
#else
#set($wkst = 1)
#end
#set($url = "${url}&wkst=${wkst}")
## Set calendar colors
#if ($parameters["colors"])
#set($color = $StringUtils.replace($parameters["colors"], "#", "%23"))
#set($url = "${url}&color=${color}")
#end
## Set language
#if ($parameters["language"])
#set($url = "${url}&language=${parameters.language}")
#end
## Set timezone
#if ($parameters["timezone"])
#set($url = "${url}&ctz=${parameters.timezone}")
#end
<iframe src="${url}"
width="${parameters["width"]}"
height="${parameters["height"]}"
style="border: 0"
frameborder="0" scrolling="no">
</iframe>
The complete macro for import: google-calendar-macro.json
Adaptavist Scriptrunner Macros
Quite a lot of ScriptRunner macros can be converted quite easily; the XHTML output should not need to be changed - for me it was mostly a matter of converting API and external calls to Unirest.
Notice that parameters may get lost during migration; I am currently looking into that.
Unfortunately, ScriptRunner macro names cannot be set in Cloud - there is an automatic "sr-uuid" naming convention - which is why you might have to change the source content in Data Center. See below under "Manipulations in the database".
Macro removal via Scriptrunner
To remove a macro in one or more pages, use this script (TAKE BACKUP FIRST).
Corrections according to https://community.atlassian.com/forums/Confluence-questions/Macro-removal-in-many-pages/qaq-p/2981874?utm_source=atlcomm&utm_medium=email&utm_campaign=mentions_reply&utm_content=topic#U3024976
import com.atlassian.confluence.pages.PageManager
import com.atlassian.sal.api.component.ComponentLocator
import org.jsoup.Jsoup
def space = '...'
def pageName = '...'
def pageManager = ComponentLocator.getComponent(PageManager)
def page = pageManager.getPage(space, pageName)
log.warn "Inspecting page ${page.title}"
def body = page.bodyContent.body
def parsedBody = Jsoup.parse(body)
// Remove the unwanted macros, we wanted to remove these three
parsedBody.select('ac|structured-macro[ac:name=getlastmodifiername]').remove()
parsedBody.select('ac|structured-macro[ac:name=getlastdatemodified]').remove()
parsedBody.select('ac|structured-macro[ac:name=getpageversion]').remove()
// Set prettyPrint to false to not break macro configurations with spaces and line breaks
parsedBody.outputSettings().prettyPrint(false)
// log.warn parsedBody.toString()
// Save the page
pageManager.saveNewVersion(page) { pageObject ->
pageObject.setBodyAsString(parsedBody.toString())
}
Manipulations in the database
To change, for example, macro names, run SQL against the database before migration (TAKE BACKUP FIRST). Here I replace a user macro name with the name of a new ScriptRunner macro in Cloud.
\c confluence; update bodycontent set body=REPLACE(body,'ac:name="jiracsdb"','ac:name="sr-macro-97d59fcd-ce9a-4df3-a4db-c2deb57e36e6"') where body LIKE '%"jiracsdb"%';
Yes, I am aware this is ALL content, not just the latest version. Remember to restart and reindex afterwards.
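To verify the replacement, a quick count before and after the UPDATE can help - a minimal sketch using the same connection details as the scripts below; the macro names match the example above:
#!/bin/bash
# Count remaining occurrences of the old macro name and of the new ScriptRunner name.
# Run before and after the UPDATE above; the count for the old name should drop to zero.
for NAME in 'jiracsdb' 'sr-macro-97d59fcd-ce9a-4df3-a4db-c2deb57e36e6'; do
  SQL="SELECT '${NAME}' AS macro, count(*) FROM bodycontent WHERE body LIKE '%ac:name=\"${NAME}\"%';"
  echo "$SQL" | psql -t -h localhost -U confluenceuser -d confluence
done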
Add pages in Data Center
This is based on a selection of home pages via the category "customer" in the Space Directory, plus a selection in the Confluence database.
Each home page gets two child pages added, with the Storage Format code taken from two other pages (Id=480590905 and 480590813) (TAKE BACKUP FIRST):
#!/bin/bash
# set -x
# Query strings
SQL="SELECT s.homepage || '#' || s.SPACEKEY FROM CONTENT_LABEL cl LEFT JOIN LABEL l ON (l.LABELID = cl.LABELID) LEFT JOIN CONTENT c ON (c.CONTENTID = cl.CONTENTID) LEFT JOIN SPACES s ON (s.SPACEID = c.SPACEID) WHERE l.NAMESPACE = 'team' AND l.NAME = 'customer';"
# Declare and instantiate the space* arrays
declare -a spaceHomepage="`echo $SQL | psql -t -h localhost -U confluenceuser -d confluence`"
# Iterate through each Space
for sID in ${spaceHomepage[@]}
do
#Split $sID
HOMEPAGEID=$(echo $sID | cut -d "#" -f 1)
SPACEKEY=$(echo $sID | cut -d "#" -f 2)
# Customer Details
rm -f page1.json
curl -u username:password -X GET https://server.domain.com/rest/api/content/480590905?expand=body.storage > page1.json
rm -f customer-details.xml
cat page1.json | jq .body.storage.value > customer-details.xml
CUSTOMERDETAILS=$(cat customer-details.xml)
# Customer Information
rm -f page2.json
curl -u username:password -X GET https://server.domain.com/rest/api/content/480590813?expand=body.storage > page2.json
rm -f customer-information.xml
cat page2.json | jq .body.storage.value > customer-information.xml
CUSTOMERINFORMATION=$(cat customer-information.xml)
#Add Customer Information page
echo "Adding Customer Information page for space ${SPACEKEY} - ${HOMEPAGEID}"
echo "{\"type\":\"page\",\"space\":{\"key\":\"${SPACEKEY}\"},\"ancestors\":[{\"id\":${HOMEPAGEID}}],\"body\":{\"storage\":{\"value\":${CUSTOMERINFORMATION},\"representation\":\"storage\"}},\"title\":\"Customer Information\"}" > data.json
curl -u username:password -X POST -H 'Content-Type: application/json' --data @data.json https://server.domain.com/rest/api/content/
#Add Customer Details page
echo "Adding Customer Details page for space ${SPACEKEY} - ${HOMEPAGEID}"
echo "{\"type\":\"page\",\"space\":{\"key\":\"${SPACEKEY}\"},\"ancestors\":[{\"id\":${HOMEPAGEID}}],\"body\":{\"storage\":{\"value\":${CUSTOMERDETAILS},\"representation\":\"storage\"}},\"title\":\"Customer Details\"}" > data.json
curl -u username:password -X POST -H 'Content-Type: application/json' --data @data.json https://server.domain.com/rest/api/content/
done
Confluence will reindex automatically.
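To spot-check the result, the child pages of a home page can be listed via the REST API - a minimal sketch assuming the same credentials and host as the script above; the /child/page resource is part of the standard Confluence Data Center REST API:
#!/bin/bash
# List child page titles under a home page to confirm the two new pages exist.
# HOMEPAGEID is a placeholder - use one of the home page ids from the loop above.
HOMEPAGEID=123456
curl -s -u username:password "https://server.domain.com/rest/api/content/${HOMEPAGEID}/child/page?limit=50" | jq -r '.results[].title'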
Change Space Homes before Migration
This is based on a selection of home pages via the category "customer" in the Space Directory, plus a selection in the Confluence database.
Each home page is updated with the Storage Format code from another page (Id=480590449) (TAKE BACKUP FIRST):
#!/bin/bash
# set -x
# Query strings
SQL="SELECT s.homepage || '#' || s.SPACEKEY FROM CONTENT_LABEL cl LEFT JOIN LABEL l ON (l.LABELID = cl.LABELID) LEFT JOIN CONTENT c ON (c.CONTENTID = cl.CONTENTID) LEFT JOIN SPACES s ON (s.SPACEID = c.SPACEID) WHERE l.NAMESPACE = 'team' AND l.NAME = 'customer';"
# Declare and instantiate the space* arrays
declare -a spaceHomepage="`echo $SQL | psql -t -h localhost -U confluenceuser -d confluence`"
# Iterate through each Space
for sID in ${spaceHomepage[@]}
do
#Split $sID
HOMEPAGEID=$(echo $sID | cut -d "#" -f 1)
SPACEKEY=$(echo $sID | cut -d "#" -f 2)
# Page Id of the page we want to copy the Storage Format from
FRONTSOURCE="480590449"
# Get newFront
rm -f page1.json
curl -u username:password -X GET https://server.domain.com/rest/api/content/${FRONTSOURCE}?expand=body.storage > page1.json
cat page1.json | jq .body.storage.value > front.xml
FRONT=$(cat front.xml)
# Get existing version
rm -f page2.json
curl -u username:password -X GET https://server.domain.com/rest/api/content/${HOMEPAGEID}?expand=version > page2.json
TITLE=$(cat page2.json | jq .title)
VERSION=$(cat page2.json | jq .version.number)
VERSION=$(($VERSION+1))
#Replace front page
echo "Replace front for space ${SPACEKEY} - ${HOMEPAGEID}"
echo "{\"id\":\"${HOMEPAGEID}\",\"type\":\"page\",\"space\":{\"key\":\"${SPACEKEY}\"},\"version\":{\"number\":$VERSION},\"body\":{\"storage\":{\"value\":${FRONT},\"representation\":\"storage\"}},\"title\":$TITLE}}" > data.json
curl -u username:password -X PUT -H 'Content-Type: application/json' --data @data.json https://server.domain.com/rest/api/content/$HOMEPAGEID
done
Confluence will reindex automatically.
Custom Domains
This is the (in)famous CLOUD-6999 - read more here.
Using custom domains still seems inadvisable, as some Apps reportedly do not work properly with it (in our case it was Comala).
And API access will still be via the Atlassian URL, not the custom domain.
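In practice that means scripted access keeps using the atlassian.net base URL even after a custom domain is set up. A minimal sketch against the Cloud REST API - the site name "domain", the e-mail address and API_TOKEN are placeholders:
#!/bin/bash
# Cloud REST calls authenticate with email + API token and must go to the atlassian.net URL,
# not the custom domain. Example: list the first space keys on the site.
curl -s -u "user@example.com:API_TOKEN" "https://domain.atlassian.net/wiki/rest/api/space?limit=5" | jq -r '.results[].key'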
As of this writing, a few open tickets:
https://jira.atlassian.com/browse/AX-667 - Allow API calls with Custom Domains
https://jira.atlassian.com/browse/CONFCLOUD-81367 - Add an option to remove /wiki from Confluence URL
https://jira.atlassian.com/browse/CONFCLOUD-57248 - Page URLs have Content ID digits
Furthermore, the non-base part of URLs is not maintained in Cloud, so this URL from Data Center:
https://server.domain.dk/display/ATLASSIAN/Best+Practices+for+Jira
Will not work in cloud as:
https://server.domain.dk/display/ATLASSIAN/Best+Practices+for+Jira
Whereas:
https://server.domain.dk/wiki/display/ATLASSIAN/Best+Practices+for+Jira
Will work - so if your docBase is /wiki/ you are better off - but we use docBase=/, and that is what CONFCLOUD-81367 is about.
We will set up a redirect, so when the URL
https://server.domain.dk/display/ATLASSIAN/Best+Practices+for+Jira
is used, we will redirect it to:
https://domain.atlassian.net/wiki/search?spaces=ATLASSIAN&title=true&text=Best+Practices+for+Jira
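However the redirect is implemented (reverse proxy, rewrite rule or a small service), the mapping itself is simple - a minimal sketch in bash, assuming the old URLs follow the /display/<SPACEKEY>/<Page+Title> pattern as above:
#!/bin/bash
# Map an old Data Center display URL to the Cloud search URL shown above.
OLD_URL="https://server.domain.dk/display/ATLASSIAN/Best+Practices+for+Jira"
SPACEKEY=$(echo "$OLD_URL" | cut -d '/' -f 5)
TITLE=$(echo "$OLD_URL" | cut -d '/' -f 6)
echo "https://domain.atlassian.net/wiki/search?spaces=${SPACEKEY}&title=true&text=${TITLE}"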
CCMA Version
To ensure that the CCMA (Confluence Cloud Migration Assistant) version used for production is the same as the one used for test, use the dark feature
migration-assistant.disable.app-outdated-check
This bypasses the normal CCMA requirement to always use the newest version of CCMA.
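To compare which CCMA version is actually installed on test and production, the Universal Plugin Manager REST resource can be queried - a minimal sketch; the plugin key com.atlassian.migration.agent is my assumption for CCMA, so verify it under Manage apps first:
#!/bin/bash
# Read the installed version of a plugin via UPM (admin credentials required).
# PLUGIN_KEY is assumed to be the CCMA key - confirm it before relying on the output.
PLUGIN_KEY="com.atlassian.migration.agent"
curl -s -u admin:password "https://server.domain.com/rest/plugins/1.0/${PLUGIN_KEY}-key" | jq -r '.version'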
App: Comala Document Management Migration
The Cloud App is getting more and more like the DC App.
Before migration, be aware of:
- All Restricted pages should be unrestricted, see https://appfire.atlassian.net/wiki/spaces/CDML/pages/648479588/Migrating+to+Confluence+Cloud#4.2-Remove-page-restrictions
- Export Comala Data for all Spaces, so the parameters will have values in cloud, see https://appfire.atlassian.net/wiki/spaces/CDML/pages/648479588/Migrating+to+Confluence+Cloud#4.4-Export-metadata
- Global workflows are available in Cloud, but are not (as of this writing) supported by the migrator
- Workflows in Cloud do not support parameters in transitions.
- The parameter type "List" is not supported and won't be migrated.
After migration, we needed to go into all workflows and set parameters to the "Page" scope; of course the default is "Workflow". Unfortunately, some parameters were empty afterwards.
Be aware that in the Cloud App there is a huge difference between "Space Workflows" and "Page Workflows" - even though the same workflow is applied to a page:
- Applied automatically via an active workflow or labels: workflow parameters cannot be changed and the backend parameters are fixed to the page.
- Applied manually via the "Add workflow" function in the byline: parameters can be changed.
App: Draw.io
Post migration, you need to export from Data Center and apply to the migrated site - several times.
https://www.drawio.com/blog/confluence-drawio-migration
This creates new versions of all pages with draw.io diagrams on them... but it seems not to send mail notifications.
A feature request has been submitted for an SQL job to run pre-migration.
App: Visibility
No problems discovered
App: Reporting
There is no migration path - and the Cloud version has no support for Comala Document Management at all. A feature request has been submitted.
App: Scriptrunner
Most code can run in Cloud with some "minor" changes - among others converting web calls to Unirest.
ScriptRunner macros can (also) be used to replace traditional user macros.
Script: Find attachment size sum per space:
In my case, I found that one space accounted for 40% of the total attachment size.
SELECT s.SPACEID, s.SPACENAME, sum(LONGVAL) / POWER(1024, 3) AS size_gigabytes
FROM CONTENTPROPERTIES c
JOIN CONTENT co ON c.CONTENTID = co.CONTENTID
JOIN SPACES s ON co.SPACEID = s.SPACEID
WHERE c.CONTENTID IN (SELECT co.CONTENTID FROM CONTENT WHERE co.CONTENTTYPE = 'ATTACHMENT')
AND c.PROPERTYNAME = 'FILESIZE'
GROUP BY s.SPACENAME, s.SPACEID
ORDER BY sum(LONGVAL) DESC;
The result was:
  spaceid  | spacename | size_gigabytes
-----------+-----------+---------------------
  83296263 | Space1    | 213.2755583645776
  79691777 | Space2    |  26.135045495815575
   1376265 | Space3    |  20.59067330416292
 196411402 | Space4    |  10.277705752290785
  16318466 | Space5    |   7.418440759181976
  10452995 | Space6    |   6.755659217946231
So out of approx. 465 GB, one space took up 213 GB, and it was not really visible in the UI that any of the files in it were huge.
If this looks totally wrong, as it did in our instance, contact Atlassian Support. In our case, some pages had a parent that had been deleted, so we gave them a new parent:
confluence=# update "content" set parentid=83355984 where contentid=109214277;
UPDATE 1
confluence=# update "content" set content_status='current' where contentid=109214277;
UPDATE 1
Clear the cache - the pages are now searchable and visible. We then deleted them, freeing 200 GB.
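For reference, pages in this state (a current page under a deleted parent) can be located with a query like the following - a minimal sketch against the standard content table, using the same connection details as the other scripts; verify the hits manually before changing anything:
#!/bin/bash
# Find current pages whose parent page has been deleted - candidates for re-parenting as shown above.
SQL="SELECT c.contentid, c.title FROM content c JOIN content p ON c.parentid = p.contentid WHERE c.contenttype = 'PAGE' AND c.content_status = 'current' AND p.content_status = 'deleted';"
echo "$SQL" | psql -h localhost -U confluenceuser -d confluence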

