Atlassian

Driving Product Performance at Scale

Atlassian's flagship products were getting slow, and performance was becoming a barrier to cloud adoption for some of the largest customers. When I joined, the approach was entirely reactive — heroic firefighting that bought temporary gains before new features quietly ate them back. The goal was to change that permanently. Over my tenure, the Perf Push program drove 40–60% latency improvements across the most-used experiences in Jira, Confluence, JSM, and Rovo, while building the tooling, guardrails, and culture to make those wins sustainable.

Role

Head of Technical Program Management

Timeline

2024 – Present

The Challenge

The most frequently used experiences across Jira, Confluence, and JSM were slow, taking anywhere from 6 to 10+ seconds to load at p90. The performance posture was entirely reactive, driven by customer complaints rather than proactive measurement and monitoring. Every new feature was a potential regression, yet there was no tooling to catch regressions before they reached users and no experimentation platform to validate whether fixes were actually helping. Complex product architectures meant each product required its own approach to move the needle, each with its own backward compatibility burden, add-on ecosystem, and backend constraints. Without an engineering leader as a partner early on, I drove the cross-org alignment required to turn performance into a success story.

Approach

  • Established the measurement foundation the program needed to operate from. Carried the TTVC (Time to Visually Complete) metric forward, ensured its correctness and full automation across products, standardized how performance work gets done across the Atlassian product family, and built the tooling infrastructure for measurement, monitoring, regression prevention, and remediation. Partnered with PM on customer feedback, CSS insights, and surveys to validate where to invest. Led a bi-weekly Perf Council with product leadership to make trade-offs and prioritize investments.
  • Drove foundational engineering investments that served two goals: improving performance and modernizing the codebase, the latter notoriously hard to get funded without a compelling performance case. We gave product teams a framework to evaluate opportunities across four categories — time to first byte, visual data query latency, SSR latency, and bundle size reduction — then drove the investments that addressed each. Platform investments included adoption of Bifrost (Atlassian's frontend PaaS for deploying and managing applications like Jira and Confluence) with cache hit ratio versioning to optimize cached performance across software versions, SSR infrastructure hardening with enforced timeouts and conditional bundling, and Forge optimization that improved cross-region API latency by 63%. Completed the React 18 migration across multiple products, unlocking 500ms–1000ms p90 improvements and enabling streaming SSR and selective hydration, so users see and interact with content sooner instead of waiting for the full page. Each product team analyzed its rendering layer cake to identify the longest dependency chain and optimize critical-path queries and APIs.
  • Built an AI-powered regression detection and remediation system to shorten the fix cycle, identifying regressions down to the specific PR or experiment. Pioneered Atlassian's first adoption of Statsig as an experimentation platform, enabling engineering teams to gate features and changes behind experiments and make data-driven decisions based on real customer exposure data before broad rollout. The system continues to evolve, with our learnings actively shaping Statsig's product roadmap.
  • Built a lean team of 6 Senior and Principal TPMs with deep ownership across the program, covering measurement, remediation, prevention, and product performance for every product in scope.
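The regression check at the heart of this kind of tooling is conceptually simple: compare a percentile latency (such as p90 TTVC) for a candidate build or experiment against a baseline, and flag it when the metric worsens beyond a tolerance. A minimal sketch in Python, with hypothetical function names and thresholds, not Atlassian's actual tooling:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

def is_regression(baseline_p90_ms, candidate_p90_ms, threshold=0.05):
    """Flag a regression when the candidate p90 worsens by more than 5%."""
    return candidate_p90_ms > baseline_p90_ms * (1 + threshold)

# Illustrative samples only: baseline vs. candidate load latencies in ms.
baseline = percentile([1200, 1500, 1800, 2100, 6000], 90)
candidate = percentile([1300, 1700, 2000, 2500, 6500], 90)
print(is_regression(baseline, candidate))  # prints True
```

In practice the comparison runs per experience and per product, which is what makes attribution down to the PR or experiment possible.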

Results

  • KR scores turned around from 0.5 in Q1 to 0.9–1.0 across Jira, Confluence, and JSM by end of H2, driven by 40–60% latency improvements across the most-used experiences in each product.
  • Regressions prevented went from zero in FY24 to 88 in FY25 and 101 in H1 FY26 alone, with time to root cause reduced by 67% (from 6 days to 2 days). Remediation speed more than doubled: the share of regressions resolved within 5 business days improved from 31% to 66%.
  • Atlassian's first adoption of Statsig (since acquired by OpenAI) for performance validation is now an active part of how engineering teams ship, blocking regressions before broad rollout, with our learnings continuing to feed back into Statsig's product roadmap.
  • Performance program expanded beyond Jira, Confluence, and JSM to include Rovo, with measurable latency improvements across its most-used experiences.
  • Team satisfaction scores of 100 on team cohesion and 88 on the manager dimension.

Atlassian team

Fully assimilated. Vegemite pending. 😉

What I Learned

  • Peer leadership matters for cross-org influence. Not having an engineering leader as a peer made the early work harder. Once that relationship was in place, influencing change at cross-org leadership level became significantly easier. Identify who you need as a peer early and invest in building that partnership.
  • What worked at Meta doesn't map directly to Atlassian. Patterns from past experience provide a starting point, but you have to assess the actual talent, architecture, and gaps before setting expectations with leadership on what's achievable and when.
  • Complex architectures require TPM as the glue. Products with backward compatibility burdens, add-on ecosystems with no guardrails, and backends not designed for performance at scale all tend to operate in their own bubbles. Bringing teams together requires building technical credibility with tech leads first, understanding each team's priorities, and then jointly influencing leaders. Each product requires its own investment.
  • Vegemite is still a work in progress; this one's going to be slow 😉

What Others Say

Kal has done a phenomenal job on keeping the performance effort on track and leading it to 0.9 KR. This is no mean feat — it involves so many engineers across so many teams. It has offense and defense elements. Overall — a super excellent job. She has been easy to partner with, she provides really good updates on the bi-weekly perf meeting. She is buttoned up — knows when to push back when she feels strongly about something.

Taroon Mandhana

VP of Product

Kal has been the tip of the spear when it has come to driving the massive impact that the Perf KR achieved in FY25 H2. Her leadership, influence and ability to organise a complex multiple-org program such as this has been exemplary. As an incredibly time poor leader myself, Kal has an amazing ability to cut through the noise and ensure I'm across critical aspects of the program so I can engage to unblock or move things forward. In terms of programs with heavy TPM involvement, Perf has been the best run by far in my view.

Bevan Blackie

Head of Engineering, Jira Product

During the performance push initiative, Kal demonstrated exceptional leadership by putting customer experience at the center of all decisions. She is the catalyst behind the perf push success. She successfully guided the team to achieve outstanding scores (1.0 for Jira, 0.9 for JSM and Confluence). Her maturity in enabling team autonomy while providing guidance allowed team members to succeed in their respective areas — experimentation, tooling, and Confluence initiatives.

Kamal Yassin

Principal TPM