December 12th, 2025

Kibana reporting is genuinely useful—right up to the moment you rely on it for executive updates, compliance evidence, customer-facing PDFs, or “send this only if something changed.” Then it starts behaving like what it really is: a headless screenshot pipeline bolted onto an interactive UI.
This post isn’t a dunk. It’s a practical map of where Kibana reporting shines, where it predictably breaks, and what teams do when they hit the wall.
Kibana’s PDF/PNG reporting is built around rendering what you see on screen into an export. Under the hood, reports are generated on the Kibana server as background jobs coordinated through Elasticsearch documents.
And the rendering itself is based on headless Chromium (Kibana manages Chromium binaries and drives the browser for screenshotting / PDF exports).
This architecture implies two important truths:
Reporting inherits UI fragility. If the UI struggles to render a view reliably, reporting will struggle too.
Reporting is “what’s on the screen,” not “what’s true.” It’s a presentation capture mechanism, not a semantic reporting engine.
That’s fine—until your use case is not a screenshot.
Most reporting needs are comparative:
“Did error rate change since last week?”
“Is this spike new or just seasonality?”
“Only notify if the KPI moved materially.”
“Send the PDF only if something significant changed.”
Kibana reporting doesn’t have a native concept of diffing between runs, conditional delivery, or “what changed since last export.” It creates a PDF/PNG of the current dashboard state.
Elastic’s own wording around reporting reinforces this “what you see” model: PDF reports are tied directly to what is seen on screen.
Why this matters: In real teams, attention is the scarce resource. Static scheduled PDFs quickly become noise—people stop reading them because they don’t answer the question “why am I being pinged?”
If your dashboards are small and tidy, Kibana reporting can be smooth. But real dashboards aren’t always small and tidy—especially in mid-market orgs where dashboards become living shared artifacts.
Elastic’s own troubleshooting guidance acknowledges that large pixel counts (big dashboards, lots of panels) can demand more memory/CPU and suggests splitting dashboards into smaller artifacts.
In practice, teams run into:
PDFs with unusable pagination or layout
Panels stretched, clipped, or missing
“For printing” exports that time out or format awkwardly
These aren’t hypothetical. Community reports describe large dashboards producing a single giant unprintable page or poorly paginated PDFs with cut-off / stretched visualizations.
And for truly huge canvases or dashboards, people end up increasing memory and timeouts dramatically and still failing—because you’re essentially asking a headless browser to deterministically render a complex app view into a document.
When reporting fails, Kibana surfaces errors like “Max attempts reached.” Elastic documents two common causes:
The export spans a large amount of data and Kibana hits xpack.reporting.queue.timeout
Reverse-proxy / server settings are not configured correctly
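If you end up tuning these, the knobs live in kibana.yml. A hedged sketch (values are illustrative, not recommendations; the setting names match recent Kibana versions, but check the docs for yours):

```yaml
# Illustrative values only; tune to your dashboards and hardware.
xpack.reporting.queue.timeout: 300000              # ms a job may run before "Max attempts reached"
xpack.reporting.capture.maxAttempts: 3             # retries before the job is marked failed
xpack.reporting.capture.timeouts.openUrl: 60000    # ms to wait for the page to load
xpack.reporting.capture.timeouts.renderComplete: 30000

# Behind a reverse proxy, tell the reporting browser how to reach Kibana directly:
xpack.reporting.kibanaServer.hostname: localhost
xpack.reporting.kibanaServer.port: 5601
xpack.reporting.kibanaServer.protocol: http
```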
This reveals a hidden cost: your team becomes the operator of a rendering farm.
Instead of “schedule report,” your backlog becomes:
tuning queue timeouts
tuning capture timeouts
resizing dashboards
splitting dashboards
debugging reverse proxy edge cases
chasing nondeterministic Chromium issues
That’s not reporting. That’s maintaining an internal PDF renderer.
Kibana alerting is solid for what it’s built to do: create rules against Elasticsearch data and send actions through connectors. Elastic positions it as a consistent interface across use cases, with integrations and scripting available.
But alerting and reporting live in different mental models:
Alerts are about signals: something crossed a threshold, a rule matched, an anomaly score tripped.
Reports are about communication: what changed, what it means, and what to do.
You can send an alert to Slack. You can attach a PDF. But Kibana doesn’t give you a first-class, built-in way to reliably produce human-ready, contextual, change-aware narratives (because its primitives are rules and screenshots).
So teams either:
spam alerts (and burn attention), or
schedule reports (and hope people read them), or
manually add context (and burn engineering time)
For compliance, the question is rarely “what does the dashboard look like right now?”
It’s:
“What was true on that date?”
“Can you prove it wasn’t tampered with?”
“Can you show consistent evidence collection over time?”
Kibana reporting can generate PDFs, but it’s not designed as a compliance evidence pipeline. If you’re in a regulated environment, you’ll feel the gap quickly: lack of run-to-run comparison, lack of explicit evidence controls, and the ease with which dashboards change after the fact.
(If you’re already collecting screenshots into a GRC folder manually, you know exactly what this costs.)
These are the patterns that appear again and again once Kibana reporting doesn’t fit.
Split the dashboard into smaller ones. This is even recommended in Elastic troubleshooting guidance.
Cost: redesign work and fractured storytelling. People lose the “single pane” view that made the dashboard valuable.
Raise the timeouts. Elastic explicitly points at timeout settings like xpack.reporting.queue.timeout when exports fail.
Cost: ongoing ops toil. Reporting becomes another service to babysit.
Rebuild the report in Canvas. Some teams do this because Canvas gives more control over page-like layouts (and community responses often point users there).
Cost: now you’re maintaining two artifacts: operational dashboards and report layouts.
Script your own headless browser. This works—because Kibana itself uses a headless browser approach.
Cost: brittle scripts, auth headaches, constant UI changes breaking automation.
Adopt a third-party tool. There’s a whole ecosystem of third-party Kibana reporting tools for a reason: teams want scheduled delivery, fewer license constraints, and more control.
Cost: additional platform, integration, and security review—plus you still often end up with static screenshots.
Kibana reporting is a fine fit when:
You export small-to-medium dashboards
You’re okay with static PDFs/PNGs
You don’t need cross-tool reporting
“Send every Monday” is acceptable even when nothing changed
It stops fitting when:
Stakeholders ask “what changed?” more than “what is it?”
Reports frequently fail on large dashboards (timeouts/layout)
You need conditional delivery (only notify on meaningful change)
You need compliance-ready evidence artifacts
Your reality is multi-tool (Kibana + Grafana + SaaS + internal UIs)
This isn’t a moral failing of Kibana. It’s just not what Kibana reporting was designed to be.
If you’re designing for real-world reporting needs, these primitives matter:
Conditional reporting
“Only send if KPI moved by X”
“Only send if visual changed”
Run-to-run diff
detect change, summarize deltas, highlight what matters
Narrative context
explain “why this matters,” not just present charts
Multi-source support
authenticated web UIs + APIs, not just one stack
Operational reliability
reporting should not require you to become a Chromium SRE
Kibana reporting gives you the screenshot. Many teams need the communication system.
What Kibana lacks isn’t another export format.
It’s a layer that understands change over time.
Teams that move past screenshot-based reporting introduce a thin reporting layer that:
captures dashboards or data at regular intervals
compares current state to previous runs
generates reports only when something meaningfully changes
adds minimal narrative context for humans
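A minimal sketch of such a layer, assuming the KPI value for each run has already been captured (the function names and the 20% threshold are illustrative, not any real API):

```python
from typing import Optional

def should_report(current: float, previous: Optional[float], threshold: float = 0.2) -> bool:
    """Report only when the KPI moved by more than `threshold` (relative change)."""
    if previous is None:        # first run: no baseline yet, send once to establish one
        return True
    if previous == 0:
        return current != 0     # any movement away from zero counts as a change
    return abs(current - previous) / abs(previous) > threshold

def build_report(current: float, previous: float) -> str:
    """Minimal narrative context: what changed and by how much."""
    pct = 100 * (current - previous) / previous
    return f"KPI moved from {previous:.2f} to {current:.2f} ({pct:+.1f}%)"
```

In a real deployment, `current` would come from a scheduled capture (a dashboard panel value or an Elasticsearch query) and `previous` from wherever the last run's result was persisted.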
Crucially, this layer does not replace Kibana.
Kibana remains the system of exploration.
The reporting layer becomes the system of communication.
Once teams adopt this pattern, reporting stops being noisy—and starts being trusted.
May 1st, 2025

Don’t let users start from scratch every time they create a job. Job templates are useful to:
Save time by anticipating common tasks
Pre-bake company-branded PDF templates
Reuse the boilerplate of a login process for a particular website
You can create unlimited templates, and you can also promote an existing job to become a template.

May 1st, 2025

Let’s use the “calculate” action to create a dynamic threshold value for our “conditional block” action:
Capture the integer hits count from Kibana Discover for the last hour
Capture the same, but for the last 24 hours
Use the “calculate” action to obtain the hourly mean hit count over the past 24h
Use the “conditional block” action to compare the last hour’s reading to the 24h mean
If the difference is less than 20%, don’t send the alert
This example is by default included in the job templates in any new installation of Anaphora.
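In plain Python, the arithmetic of this template looks roughly like this (the actual “calculate” and “conditional block” actions are configured in the Anaphora UI; this sketch only mirrors their logic):

```python
def hourly_mean(hits_24h: int) -> float:
    """'calculate' step: mean hits per hour over the past 24 hours."""
    return hits_24h / 24

def should_alert(hits_last_hour: int, hits_24h: int, threshold: float = 0.20) -> bool:
    """'conditional block' step: alert only when the last hour deviates
    from the 24h hourly mean by at least `threshold` (20% by default)."""
    mean = hourly_mean(hits_24h)
    if mean == 0:
        return hits_last_hour > 0   # any traffic at all is a deviation from zero
    return abs(hits_last_hour - mean) / mean >= threshold
```

With 2,400 hits over 24 hours (a mean of 100/hour), a reading of 130 in the last hour (+30%) would trigger the alert, while a reading of 110 (+10%) would be skipped.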
May 1st, 2025

Thanks to the conditional block, we can create proper alerts. A simple example:
Capture a string/numeric value from a website
Capture a numeric/string value from another website
Compare the two
Skip sending the report if condition is met
This is useful for brute-force attack detection, or any other alert about a high log event count over time. For example:
Capture the Kibana Discover query result “count” for the last hour
Check whether this value is > 1 million
If so, send the report (notify the system admin)
April 30th, 2025

Keeping all the jobs running is a great responsibility. Fortunately, Anaphora can use a delivery interface to alert you via email, Slack, webhook, etc. when errors start occurring.
No danger of getting spammed: a maximum notification frequency can be set.
Alternatively, you can use the REST API to monitor the green, yellow, red state of each job.
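For example, a small polling script could parse that per-job state. The endpoint path and response shape below are hypothetical, used only to illustrate the pattern; check the Anaphora API documentation for the real ones:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; the real path is defined by the Anaphora REST API docs.
HEALTH_URL = "https://anaphora.example.com/api/jobs/health"

def failing_jobs(health_json: str) -> list[str]:
    """Given the health endpoint's JSON body (assumed shape:
    [{"name": ..., "state": "green" | "yellow" | "red"}, ...]),
    return the names of jobs in the 'red' state."""
    jobs = json.loads(health_json)
    return [j["name"] for j in jobs if j["state"] == "red"]

def poll() -> list[str]:
    """Fetch the current health document and return failing job names."""
    with urlopen(HEALTH_URL) as resp:
        return failing_jobs(resp.read().decode("utf-8"))
```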
April 30th, 2025

You can now capture each visualization of a Kibana dashboard as an individual tile, and selectively reassemble the tiles by drag & drop in Anaphora’s visual PDF report composer.

You can now make your PDF prettier by adding a page background (PNG, SVG, solid colors, gradients, etc). Trick: use lower opacity to watermark the background!
April 30th, 2025

When you have a multi-step capture, clicks, navigations, and selectors can be tricky to get right. So we introduced the ability to record, view, and download a full video of the headless browser session, so you can see what went wrong.

You can also download the debug files containing the browser’s internal state, making it easier for us to support you when things are hard to debug.
April 30th, 2025


Similar to ReadonlyREST’s tenancies, users and roles can be associated with an unlimited number of spaces via “permissions”.
A space is a virtual container of Jobs and Delivery Interfaces. Each permission grants one of three access levels: read-only, read-write, or admin.

Users with admin access to two or more spaces can copy resources between them.

April 30th, 2025

Navigate to Settings > System Settings to find all the available authentication and authorization connector configurations.

We support LDAP across the major open-source server implementations, like OpenLDAP and LemonLDAP, as well as Microsoft Active Directory Domain Services (AD DS), IBM Security Directory Server, and others.
SAML 2.0 SSO is available to connect to your enterprise’s centralized authentication; it works with Keycloak, Azure ADFS, Okta, OneLogin, and many others.

OpenID Connect support is finally here! Keycloak is our reference implementation, but OIDC is a true master key to the wider world of identity providers.
