Confluence site unreachable for some users

Incident Report for Confluence

Postmortem

Summary

On Jan 08, 2026, between 14:54 UTC and 16:30 UTC, Atlassian customers using Confluence Cloud experienced degraded service when viewing and editing pages. The event was triggered by database overload caused by an unexpectedly large burst of traffic. The overload was the result of a configuration change that led to a sudden spike in database connections, impacting a subset of customers in a single partition in the us-east region. The incident was detected within 1 minute by automated monitoring systems and mitigated by reducing the number of web server hosts connecting to the database layer, which returned Atlassian systems to a known good state. The total time to resolution was about 1 hour and 36 minutes.

IMPACT

The impact window was Jan 08, 2026, 14:54 UTC to 16:30 UTC, and affected Confluence Cloud. The incident disrupted service for customers in a single partition in the us-east region, impairing their ability to view and edit pages. We observed partition-wide database saturation due to an unexpectedly large burst of traffic. Confluence Cloud components impacted during this window were: Login, View / Edit Page, Publish Page, Add Page, and Comment.

ROOT CAUSE

A change was introduced that routed customer traffic across regions instead of keeping it within the same region. Some caches processing customer data were stale, which heavily loaded the databases and caused them to restart. As a result, affected users could not log in, view, or edit pages, and received HTTP 500 and 504 errors.
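To illustrate the class of failure described above, here is a minimal sketch of a region-affinity guard that refuses to route a request outside its home region silently. All names (`select_backend`, the backend hostnames) are hypothetical and are not Atlassian's actual routing code.

```python
def select_backend(request_region: str, backends: dict[str, str]) -> str:
    """Pick a backend host for a request, preferring its own region.

    Raising on a missing region (rather than silently falling back to
    another region) surfaces cross-region routing as an error instead
    of letting it overload a remote partition's databases.
    """
    if request_region in backends:
        return backends[request_region]
    raise LookupError(f"no backend registered for region {request_region!r}")


backends = {
    "us-east": "confluence-us-east.internal",  # hypothetical hostnames
    "us-west": "confluence-us-west.internal",
}
assert select_backend("us-east", backends) == "confluence-us-east.internal"
```

A deliberate, rate-limited fallback path could replace the `raise` where cross-region failover is actually intended.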

REMEDIAL ACTIONS PLAN & NEXT STEPS

We know that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific issue escaped detection because it required a particular combination of factors to occur together.

We are prioritizing the following improvement actions to help avoid repeating this type of incident:

  • Fix the routing issue that resulted in cross-regional traffic
  • Decrease the maximum number of connections per database instance, to leave sufficient memory capacity to handle a surge in connections

Furthermore, we deploy our changes progressively (by cloud region) to avoid broad impact, but in this case the impact was larger than desired. To minimize the impact of breaking changes on our environments, we will implement additional preventative measures, such as enabling automatic vertical scaling when database clusters are running low on memory.

We apologize to customers whose services were impacted during this incident; we are taking steps to help improve the platform’s performance and availability.

Thanks,

Atlassian Customer Support

Posted Jan 28, 2026 - 01:50 UTC

Resolved

On Thursday, January 8, 2026, affected Confluence Cloud users in the us-east-1 region may have experienced some service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.
Posted Jan 08, 2026 - 19:09 UTC

Update

The issue has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor closely to confirm stability.
Posted Jan 08, 2026 - 16:36 UTC

Monitoring

The issue has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor closely to confirm stability.
Posted Jan 08, 2026 - 16:35 UTC

Investigating

We are actively investigating reports of a partial service disruption affecting Confluence Cloud for some customers. We'll share updates here within an hour, or sooner as more information becomes available.
Posted Jan 08, 2026 - 15:27 UTC
This incident affected: View Content, Create and Edit, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data and Mobile (iOS App, Android App).