On Jan 08, 2026, between 14:54 UTC and 16:30 UTC, Atlassian customers using Confluence Cloud products experienced degraded service when viewing and editing pages. The event was triggered by database overload caused by an unexpectedly large burst of traffic. The overload resulted from a configuration change that led to a sudden spike in database connections, impacting a subset of customers in a single partition in the us-east region. The incident was detected within 1 minute by automated monitoring systems and mitigated by reducing the number of web server hosts connecting to the database layer, which returned Atlassian systems to a known good state. The total time to resolution was about 1 hour and 36 minutes.
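To illustrate why reducing the number of web server hosts relieves the database layer, the sketch below models each host as holding its own connection pool, so the total connections the database must serve scales with host count. All numbers and names here are illustrative assumptions, not Atlassian's actual configuration.

```python
# Hypothetical illustration: each web server host keeps its own
# connection pool, so the database's connection load scales linearly
# with the number of hosts. All figures below are assumptions.

def total_db_connections(hosts: int, pool_size_per_host: int) -> int:
    """Upper bound on connections a fleet of web servers can open."""
    return hosts * pool_size_per_host

DB_CONNECTION_LIMIT = 5000  # assumed database connection ceiling

# During the spike: too many hosts, each holding a full pool.
before = total_db_connections(hosts=120, pool_size_per_host=50)

# Mitigation: shrink the fleet connecting to the database layer.
after = total_db_connections(hosts=80, pool_size_per_host=50)

print(before > DB_CONNECTION_LIMIT)   # True  -> database saturated
print(after <= DB_CONNECTION_LIMIT)   # True  -> back within capacity
```

Capping the fleet size bounds the connection count even when per-host pools stay full, which is why this mitigation takes effect quickly without touching the database itself.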
The overall impact occurred between Jan 08, 2026, 14:54 UTC and 16:30 UTC on Confluence Cloud products. The incident disrupted service for customers in a single partition in the us-east region, impairing their ability to view and edit pages. We observed partition-wide database saturation due to an unexpectedly large burst of traffic. The Confluence Cloud components impacted during this window were: Login, View / Edit Page, Publish Page, Add Page, and Comment.
A change was introduced that routed customer traffic across regions instead of to the customer's own region. Some caches processing customer data were stale, which heavily loaded the databases and caused them to restart. As a result, users of the product above could not log in, view, or edit pages, and received HTTP 500 and 504 errors.
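The routing failure described above can be sketched minimally: the intended behavior keeps a customer's traffic in their home region, while the misconfiguration pins traffic to a single region regardless of origin. Function names and region identifiers below are hypothetical, chosen only to make the failure mode concrete.

```python
# Hedged sketch of region-affine routing and the failure mode above.
# Region names and both functions are hypothetical illustrations.

def route_intended(customer_region: str) -> str:
    """Intended behaviour: serve customers from their home region."""
    return customer_region

def route_misconfigured(customer_region: str,
                        forced_region: str = "us-east") -> str:
    """Failure mode: a config change pins all traffic to one region."""
    return forced_region

print(route_intended("eu-west"))       # eu-west (same-region affinity)
print(route_misconfigured("eu-west"))  # us-east (cross-region traffic)
```

With every region's traffic funnelled into one region, that region's databases absorb load they were never sized for, matching the partition-wide saturation observed here.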
We know that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific issue was not identified beforehand because of the particular combination of factors that produced this condition.
We are prioritizing the following improvement actions to help avoid repeating this type of incident:
We deploy our changes progressively (by cloud region) to avoid broad impact, but in this case the impact was larger than intended. To minimize the impact of breaking changes to our environments, we will implement additional preventative measures, such as enabling automatic vertical scaling when database clusters are running low on memory.
We apologize to customers whose services were impacted during this incident; we are taking steps to help improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support