# Description
Exposes new metrics (see the sketch after the lists below):
* Created Bookings (Excl. Cancelled)
* Cancelled Created Bookings
* Check Out Bookings (Excl. Cancelled)
* Cancelled Check Out Bookings
Renames existing metrics:
* Created Bookings -> Total Created Bookings
* Checkout Bookings -> Total Check Out Bookings
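For reference, a minimal sketch of the intended split, assuming a hypothetical `fct_bookings` model with an `is_cancelled` flag (neither name is the project's actual schema; the Check Out metrics would follow the same pattern over the check-out date):

```sql
-- Sketch only: model and column names are assumptions.
select
    date_trunc('month', created_at) as metric_month,
    count(*) as total_created_bookings,
    sum(case when is_cancelled then 0 else 1 end) as created_bookings_excl_cancelled,
    sum(case when is_cancelled then 1 else 0 end) as cancelled_created_bookings
from {{ ref('fct_bookings') }}
group by 1
```

Note that the total is the sum of the two split metrics, which is what the rename to `Total ...` is meant to convey.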
Removes exposure of the previous Cancelled Bookings metric. It is still hardcoded in the monthly by-deal model; the PBI report needs to change before it can be removed safely. This will be done later, once we remove the Cancelled Bookings models.
Adapts the existing KPI tests to accommodate these changes by including the new metrics. I've also lowered the detector threshold from 8 to 5: it hasn't triggered in a while, so it seems worth the effort to gain more detection sensitivity.
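For reference, a hypothetical sketch of what the threshold parameter could look like on a dbt generic test (the project's actual macro name, signature, and columns will differ):

```sql
{% test kpi_outlier_detection(model, metric_column, threshold=5) %}
-- Hypothetical generic test: returned rows are failures, i.e. the
-- latest value deviates from its historical mean by more than
-- `threshold` noise units. Default lowered from 8 to 5 per this PR.
select *
from {{ model }}
where abs({{ metric_column }} - historical_avg)
    > {{ threshold }} * nullif(historical_stddev, 0)
{% endtest %}
```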
# Checklist
- [X] The edited models and dependants run properly with production data.
- [X] The edited models are sufficiently documented.
- [X] The edited models contain PK tests, and I've run and passed them.
- [X] I have checked for DRY opportunities with other models and docs.
- [X] I've picked the right materialization for the affected models.
# Other
- [ ] Check if a full-refresh is required after this PR is merged.
Related work items: #24637
# Description
Adapts the revenue figures in Main KPIs at the MTD/global scope. This covers MTD, Monthly Overview, Global Evolution over Time, and Detail by Category; in essence, everything that is not by deal.
There are mainly 2 changes (see the sketch after this list):
* Remove the line that deducts the `Waiver Amount Paid Back to Hosts` from all metrics except `Waiver Net Fees`. This effectively makes the previous `Guest Revenue` equal to `Guest Payments`, so I dropped all 3 `Guest Payments` metrics.
* Rename metrics at the display level, but not in the code. For instance, I removed the computation of `guest_revenue_in_gbp` and kept `guest_payments_in_gbp`, applying the rename downstream, since the modelling already supports metric display names that differ from the field names. For the rest of the metrics, I revised every metric name and made changes based on the [whiteboard](https://whiteboard.office.com/me/whiteboards/p/c3BvOmh0dHBzOi8vZ3VhcmRob2ctbXkuc2hhcmVwb2ludC5jb20vcGVyc29uYWwvcGFibG9fbWFydGluX3N1cGVyaG9nX2NvbQ%3d%3d/b!T2D3opQuBECSDnhuFZrUacFu3TxvSvdIsnI4Dxsh2IuaB1AigbciRqkqte61I4wz/01H5SI4J4L7HTPJGUT7JGYKTOSQYYWACXU). I also updated the dedicated data tests in Main KPIs to ensure everything works, and changed the name-based exclusion logic in reporting so that metrics depending on the invoicing cycle are only displayed for months at least 2 months in the past.
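Conceptually, both changes look something like this (the `main_kpis` relation and the waiver field names are illustrative assumptions; `guest_payments_in_gbp` and `guest_revenue_in_gbp` are the fields discussed above):

```sql
-- Illustrative only. Previously every revenue metric deducted the
-- waiver payback, e.g.:
--   guest_revenue_in_gbp = guest_payments_in_gbp - waiver_paid_back_to_hosts_in_gbp
-- With the deduction removed, guest_revenue_in_gbp duplicates
-- guest_payments_in_gbp, so only the latter is kept and the display
-- name is remapped downstream.
select
    guest_payments_in_gbp,  -- displayed as "Guest Revenue" at the metric level
    waiver_fees_in_gbp
        - waiver_paid_back_to_hosts_in_gbp as waiver_net_fees_in_gbp  -- only metric keeping the deduction
from {{ ref('main_kpis') }}
```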
To keep in mind:
* Merging this will automatically display the new figures/naming in production. It might be wise to communicate this to stakeholders, since some key metrics (namely, Guest Revenue / Total Revenue) will change meaning.
* We also need to apply these changes to the by-deal part of the computation. I'd first remove these fields from the PBI report (taking the opportunity to update the Data Catalogue) and then open the DWH PR to change the logic. Before that, though, let's check that the names included in this PR are the correct ones :)
# Checklist
- [X] The edited models and dependants run properly with production data.
- [NA] The edited models are sufficiently documented.
- [X] The edited models contain PK tests, and I've run and passed them.
- [NA] I have checked for DRY opportunities with other models and docs.
- [NA] I've picked the right materialization for the affected models.
# Other
- [ ] Check if a full-refresh is required after this PR is merged.
Related work items: #22688
# Description
This PR adapts the outlier detection test for KPIs. Specifically:
1) It removes non-additive lifecycle metrics, namely:
- Churning Listings/Deals
- Listings/Deals Booked in Month/6 Months/12 Months
This is because the test derives a daily figure by simply computing value / number of days elapsed. The problem is that for all these metrics, listing/deal bookings are counted **uniquely over a month**, i.e., if a listing is booked 100 times in a single month, it still only counts once. Early in the month, the test therefore divides an already-large unique count by only a few days, inflating the daily figure and making the test fail. Churn is a similar case: at the beginning of the month we have the maximum number of listings/deals expected to churn if nothing happens, and this count can decrease slightly over time as they get reactivated.
2) I reduced the variance threshold from 10 to 8, meaning alerts will now raise more often. Since we're removing the wrongly assessed metrics from the computation, I feel we can live with finer-grained detection. It could be reduced even further (8 is still a very high tolerance): today the maximum signal-to-noise ratio was below 4, on checkout bookings. I'd propose watching how it behaves over the following days and then assessing whether to reduce it further.
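For context on the signal-to-noise figure, a diagnostic query along these lines (the `kpi_history_stats` relation and its columns are assumptions for illustration) is what informs how low the threshold can safely go:

```sql
-- Illustrative: rank metrics by how many noise units the latest
-- daily-normalised value sits away from its historical mean; the
-- test fails any metric whose ratio exceeds the threshold (now 8).
select
    metric_name,
    abs(daily_value - hist_avg) / nullif(hist_stddev, 0) as signal_to_noise
from {{ ref('kpi_history_stats') }}
order by signal_to_noise desc
```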
# Checklist
- [X] The edited models and dependants run properly with production data.
- [ ] The edited models are sufficiently documented.
- [ ] The edited models contain PK tests, and I've run and passed them.
- [ ] I have checked for DRY opportunities with other models and docs.
- [ ] I've picked the right materialization for the affected models.
# Other
- [ ] Check if a full-refresh is required after this PR is merged.
# Description
Adds 2 tests for KPIs to:
1. Check that, for additive metrics, the values observed in the last run for dimensions other than Global are consistent with the Global figures (see the sketch after this list). Consistent does not mean exact, though
2. Check that the values observed in the last run for the Global dimension are similar to those observed previously, to raise an alert in case of outliers. This one is tricky because false positives are possible, so extensive documentation has been provided on the test and its parameters.
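A minimal sketch of what test 1 checks, assuming a long-format KPI table with illustrative `dimension`, `metric_name`, and `value` columns and a 1% relative tolerance (the real names and tolerance may differ):

```sql
-- Illustrative dbt data test: returned rows are failures. For each
-- additive metric, the sum across a dimension's values should match
-- the Global figure within a tolerance ("consistent, not exact").
with per_dimension as (
    select dimension, metric_name, sum(value) as dimension_total
    from {{ ref('main_kpis') }}
    where dimension != 'Global'
    group by dimension, metric_name
),
global_figures as (
    select metric_name, value as global_total
    from {{ ref('main_kpis') }}
    where dimension = 'Global'
)
select d.dimension, d.metric_name, d.dimension_total, g.global_total
from per_dimension d
join global_figures g on d.metric_name = g.metric_name
where abs(d.dimension_total - g.global_total) > 0.01 * abs(g.global_total)
```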
Note: this runs well targeting production. It also detects the cancelled bookings issue when simulated as if it had run on the 31st of August. Since the test only takes the last update into account, once an alert is raised it will usually not raise again the next day.
# Checklist (does not apply)
- [ ] The edited models and dependants run properly with production data.
- [ ] The edited models are sufficiently documented.
- [ ] The edited models contain PK tests, and I've run and passed them.
- [ ] I have checked for DRY opportunities with other models and docs.
- [ ] I've picked the right materialization for the affected models.
# Other
- [ ] Check if a full-refresh is required after this PR is merged.
Adding specific business tests for KPIs
Related work items: #20824