diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461.md b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461.md
new file mode 100644
index 0000000..88b8cb5
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461.md
@@ -0,0 +1,1089 @@
+# Data News
+
+A place to communicate progress, achievements, updates and generally any new thing from the Data Team. Come regularly to stay updated.
+
+# 2025-07-07
+
+## Pablo leaves the team
+
+Pablo here. Most of you should know by now, but in case you’ve been hiding under a rock, I’m one week away from leaving Truvi.
+
+After a very sweet tenure where I got to start and grow the Data team, I’ve decided to take a shot at a position in a wildly different company. It’s been a tough decision, because I’ve had (and I’m still having) a great time at Truvi. But sometimes you need to sacrifice something great to see if you can score something even better.
+
+My final day will be 14/07. In the meantime, feel free to reach out for work-related stuff or simply a goodbye coffee. And rest assured, the Data team will continue forward as usual: Uri and Joaquín will continue the work and keep delivering, just as always.
+
+## What drives resolution incidents?
+
+It’s been a while since our latest Data News entry… because we’ve been busy carrying out an in-depth analysis of what causes resolution incidents, in the scope of the Data-Driven Risk Assessment project (or just DDRA).
+
+In these past few weeks, the three members of the Data Team have independently analysed data around New Dash Protected Bookings and whether these ended up generating a Resolution Incident: from characteristics of the Booking, the Booking Services, the Listing, the Guest and Guest Journey, Occupancy Rates… and so on.
+
+After a consolidation effort, we’re happy to announce that we have a new Data Paper available!
+
+Here are some highlights from the analysis:
+
+- Longer stays are about 3x more likely to result in a resolution incident than shorter ones.
+- Screening and Deposit Management services - while protective - are associated with a 2-3x higher likelihood of incidents.
+- Larger listings (more bedrooms or bathrooms) show a 3-5x increase in incident likelihood.
+- Lower expected occupancy at check-in time makes bookings 2x more likely to run into trouble.
+- Younger guests (on average) carry a 2-5x higher risk of a resolution incident.
+
+But we didn’t stop there. We trained two Machine Learning models (AI) using the most predictive features we found, and compared them to our current flagging system. The result? A contactless ML model that we estimate performs up to 10x better than our existing process.
+
+For additional, in-depth detail, we encourage you to take a look at the Data Paper:
+
+[2025-07-07 Understanding what causes Resolution Incidents and how to predict them](https://www.notion.so/2025-07-07-Understanding-what-causes-Resolution-Incidents-and-how-to-predict-them-2250446ff9c980649761d9ded927a023?pvs=21)
+
+## Revenue Churn targets have been updated
+
+Among the different targets we track in Main KPIs, we recently noticed that the target for Revenue Churn Rate was supposed to be 1%, and not the 3% we were showing.
+
+A quick update has been released to capture the correct targets. This has affected both Revenue Churn Rate and Revenue Churn (in GBP) metrics.
+
+
+
+Yep that was easy… once we found it…
+
+## More informative data alerts
+
+The Data team regularly monitors most of the tables we have in the DWH with automatic data tests. These tests regularly review the data in our DWH to ensure it complies with what we expect, and to avoid data quality issues that could affect our reporting.
+
+Until recently, our alerts simply *raised* the alert, but didn’t provide much detail about what exactly was wrong. Thanks to some recent work, we’re now receiving very detailed reports via Slack any time one of these alerts gets triggered. With this, the team can identify the issue and the root cause much faster and more conveniently, and we avoid having to read through some terribly formatted log files on a server.
+
+With this, we will be able to continue catching issues before they hit you and your reports.
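+
+For the curious, this is roughly the shape such a check can take, written as a dbt-style singular test. The model and column names below are hypothetical and only illustrate the idea, not the actual tests we run:
+
+```sql
+-- Hypothetical singular data test: returns the offending rows so the Slack
+-- report can include them. A non-empty result means the test fails.
+select
+    booking_id,
+    billable_amount
+from {{ ref('fct_bookings') }}   -- hypothetical model name
+where billable_amount < 0        -- amounts should never be negative
+   or booking_id is null         -- every booking needs an identifier
+```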
+
+## First invoicing cycle after the Guest Products release
+
+On July 1st, the Data team delivered the invoicing exports for the Finance team as usual. But this occasion was a bit special, since it was the first cycle after the release of Guest Products by the Guest Squad. This new release offers a lot of possibilities for our dashboard guest journey, but came at the price of much of our foundational code being changed. This impacted the invoicing process.
+
+During May and June, the Data team worked hard to ensure that all our DWH, reporting and invoicing related code and tables were ready for the delivery of this new release. And finally, on June 1st, we made the first exports with the new version of the data.
+
+So far things have been smooth, so it seems we’ve overcome the challenge. Now it’s time for the Guest Squad to come up with great ideas and keep leveraging the possibilities that come with the new changes they’ve made.
+
+# 2025-06-13
+
+## Categorising New Dash users for alerting purposes
+
+This week we’ve also worked on tagging New Dash accounts for alerting purposes. The idea behind this is to be able to:
+
+- Quickly identify which accounts need reviewing and what potential action needs to be taken.
+- Understand the impact in terms of Listings & Bookings for each category
+
+This is a long entry, but please read carefully as it’s very relevant.
+
+### User categories definition
+
+We have created 11 user categories (a simplified SQL sketch of how such a categorisation could be derived follows the list), which are:
+
+- **00 - No Alert**: Users have all their active listings with upgraded programs and these have generated upgraded bookings. Also, these users have not churned. All is good!
+- **01 - No Listings**: The user has not churned and does not have any listings. We should check if there’s a problem on the integration or the listing setup.
+- **02 - No Active Listings**: The user has not churned and does not have any active listings. We should check if there’s a problem on the integration or the listing setup.
+- **03 - No Bookings - No Upgraded Program in Listings**: The user has not churned and has active listings, but these only contain Basic Screening. The user has not generated Bookings so far. We should check why no upgraded programs are applied to the listings.
+- **04 - No Bookings - Has Upgraded Program in Listings**: The user has not churned and has active listings with upgraded programs, but there are no bookings. It’s possible that the user was recently onboarded or migrated. However, if that’s not the case, we should understand why there are no bookings.
+- **05 - Only Basic Screening Bookings - No Upgraded Program in Listings**: The user has not churned, has active listings and has bookings - but everything is at Basic Screening! This is an upsell opportunity or a sign of a problem with the migration/onboarding of the account.
+- **06 - Only Basic Screening Bookings - Has Upgraded Program in Listings**: The user has not churned and has active listings with upgraded programs, but all bookings are Basic Screening. It’s possible that these listings were upgraded recently and we’re waiting for the bookings. However, if the problem persists, we should understand why the upgraded listings are not generating upgraded bookings.
+- **07 - Has Upgraded Bookings - No Upgraded Program in Listings**: The user has not churned and has had upgraded bookings in the past, but currently it can only generate Basic Screening bookings! This is an important case to understand why the user has moved to free services, and a good upselling opportunity.
+- **08 - Has Upgraded Bookings - Not all Listings have Upgraded Program Applied**: The user has not churned, has some upgraded bookings and has some active listings with upgraded programs. Seems ok, right? No - because a few bookings are still Basic Screening, due to the fact that there are some active listings that do NOT have upgraded programs.
+- **98 - Has Churned:** User has churned, recommended to exclude if wanting to understand the current business snapshot.
+- **99 - Has Data Quality Issues**: A few edge cases such as the account not having a live date in HubSpot or having the MVP launch as the initial date in New Dash. For more techy profiles to check what’s going on. Recommended to exclude for business audiences.
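+
+As promised, here’s a simplified sketch of how such a categorisation can be derived in SQL. The table and flag names are hypothetical, only a handful of the 11 categories are shown, and the real logic in the DWH is more complete:
+
+```sql
+-- Simplified user categorisation sketch (hypothetical names, not the real model).
+-- The order of the conditions matters: churn is checked first, then listing and
+-- booking conditions, falling back to "00 - No Alert".
+select
+    user_id,
+    case
+        when has_churned                                              then '98 - Has Churned'
+        when total_listings = 0                                       then '01 - No Listings'
+        when active_listings = 0                                      then '02 - No Active Listings'
+        when total_bookings = 0 and upgraded_program_listings = 0     then '03 - No Bookings - No Upgraded Program in Listings'
+        when total_bookings = 0                                       then '04 - No Bookings - Has Upgraded Program in Listings'
+        when upgraded_bookings = 0 and upgraded_program_listings = 0  then '05 - Only Basic Screening Bookings - No Upgraded Program in Listings'
+        -- ... remaining categories omitted for brevity ...
+        else '00 - No Alert'
+    end as user_category
+from user_snapshot   -- hypothetical table with one aggregated row per user
+```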
+
+### How are these user categories being used?
+
+The user category is only available in **New Dashboard Reporting** → **New Dashboard Overview**, in Power BI. [Link is available here](https://app.powerbi.com/groups/me/apps/d6a99cb6-fad1-4e92-bce1-254dcff0d9a2/reports/44d8eee3-e1e6-474a-9626-868a5756ba83/915f40519c0a301c209c?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi).
+
+There are 2 main areas in which this categorisation is available in the report:
+
+- New tab **User Alerts**, as a summary or overview
+- Existing tab **User Detail**, as a dedicated filter and field in the table
+
+Let’s start with the new tab called User Alerts, which looks like this:
+
+
+
+New tab User Alerts
+
+The rows show the 11 distinct user categories, plus a total row. The columns split users between those migrated from Old Dash and New Business users, and also include a totals column.
+
+Here we represent 5 different measures:
+
+- **Users**: count of users that are under a certain category
+- **Active Listings**: count of listings that are active - that can generate bookings.
+- **Active Listings with Upgraded Program**: count of listings that are active and that can generate upgraded bookings; meaning, bookings beyond Basic Screening
+- **Total Bookings**: total count of bookings, including Upgraded bookings + Basic Screening bookings
+- **Upgraded Bookings**: count of bookings that are not Basic Screening bookings.
+
+→ Note that Basic Screening Bookings would be Total Bookings minus Upgraded Bookings.
+
+A simple coloured flag system is in place to identify the severity of each alert: Red = very bad, Yellow = bad, Green = good and Black = edge cases.
+
+It’s worth mentioning that even if an alert is yellow, having tons of affected users/listings/bookings might increase the need to act!
+
+Let’s continue with User Detail. We’ve added the new filter User Categorisation which can be used to select one or many alerts at the same time. You can combine this with additional filters, as well. For instance, if selecting the alert #03 - No Bookings - No Upgraded Program in Listings:
+
+
+
+The table and the callouts will update accordingly. You’ll also notice that the User Categorisation is available as the 3rd field in the table.
+
+In order to make this tab more complete, we’ve also explicitly added the count of Active Listings that are Basic Screening (or just Active Basic Screening Listings), as well as the % over the total. Lastly, and similarly to Bookings, we’ve added an Upgraded Listings Rate which simply shows the % of Active Upgraded Listings vs. all Active Listings per account.
+
+Let’s deep dive into the alert #08 - Has Upgraded Bookings - Not all Listings Have Upgraded Program Applied. We’ve just selected this User Categorisation and sorted by Upgraded Listings Rate:
+
+
+
+Let’s focus on the first row: this user has 19 active listings, of which 1 has an Upgraded Program and the remaining 18 have Basic Screening. This results in an Upgraded Listings Rate of 5.3% (1 divided by 19). This user has generated a total of 584 bookings, of which 80 have an upgraded program. The remaining 504 bookings are Basic Screening-only, which results in an Upgraded Booking Rate of 13.7%.
+
+To put it simply, these are 504 bookings on which we missed the opportunity to generate any kind of revenue. Worth noting that this extends to 14,815 bookings when considering all users in this alert.
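+
+For reference, this is roughly how the rates in this table can be computed per account. It’s a sketch with hypothetical table and column names, matching the worked example above:
+
+```sql
+-- Per-account rates shown in the User Detail tab (hypothetical names).
+-- For the example account: 1 / 19 ≈ 5.3% and 80 / 584 ≈ 13.7%.
+select
+    account_id,
+    total_bookings - upgraded_bookings                            as basic_screening_bookings,
+    active_upgraded_listings * 1.0 / nullif(active_listings, 0)  as upgraded_listings_rate,
+    upgraded_bookings * 1.0 / nullif(total_bookings, 0)          as upgraded_bookings_rate
+from account_summary   -- hypothetical table with one aggregated row per account
+```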
+
+### Early conclusions
+
+Early because we also need time to deep-dive... But there are a few interesting cases we can already act upon. I’ll only include accounts that have been in New Dash since before June, to remove slight time delays: 2 weeks should be sufficient. I’ll also exclude churned and data quality alerts from the totals, to have a clean picture.
+
+Here are the highlights:
+
+- On Migrated accounts from Old Dash:
+ - 43% of the migrated accounts have Medium or High Priority alerts (yellow and red). This corresponds to 307 accounts.
+    - **7.3% of the migrated accounts have High Priority alerts**. This corresponds to 53 accounts, a few having No Listings (11), No Active Listings (19), or No Upgraded Programs in Listings (9 without Bookings, 14 with Basic Screening Bookings).
+ - 17% of the Bookings are Basic Screening, the main gap being
+- On the onboarding of New business accounts - i.e. not migrated:
+ - 48% of the new business accounts have Medium or High Priority alerts (yellow and red). This corresponds to 196 accounts.
+    - **20% of the new business accounts have High Priority alerts**. This corresponds to 83 accounts, the majority having No Listings (48), No Active Listings (18), or No Upgraded Programs in Listings (5 without Bookings, 12 with Basic Screening Bookings)
+- On Upgraded Bookings - i.e., that we can generate revenue vs. Basic Screening Bookings (free)
+    - 61% of the Total Bookings come from users tagged as 08 - Has Upgraded Bookings - Not all Listings have Upgraded Program Applied vs. just 30% coming from 00 - No Alert.
+ - However, **the impact of the alert #08 is huge in terms of Basic Screening-only Bookings: around ~15K, which explains 3/4 of all Basic Screening Bookings.**
+ - At this stage it’s worth mentioning that part of these bookings might come from automatic listing activation + automatic booking pulling that should be resolved in the coming weeks, as part of the Basic Screening iterations. This just highlights how relevant this piece of work is.
+
+There are many more insights to reveal with this new categorisation, and hopefully we all get more used to it over time!
+
+## Updates on Invoicing & Crediting Report
+
+This week, we worked on updating the Invoicing & Crediting Report. This update aligns with recent conversations we've had with the Finance and Resolutions teams, aiming to help them reduce the time spent searching for specific data or determining which tasks should be prioritized.
+
+With that in mind, we've added two new tabs to the report:
+
+- **Accounts Due Amount:** This tab highlights the accounts with the highest pending payments to Truvi, as well as the accounts with the highest pending payments from Truvi.
+The goal is to give the Finance team a clearer view of which accounts should be prioritized for follow-up and payment actions.
+
+
+
+- **Due Amount Details:** This tab provides a more detailed view of both pending invoices and credit notes. It includes all relevant information for each document, allowing users to dive deeper into the data related to each outstanding payment.
+It also includes a secondary view called *Financial View*, which offers a simplified version of the General View for financial analysis.
+
+
+
+## Screen & Protect API pricing and reporting
+
+Over the past few weeks, internal discussions have focused on the evolving pricing structure of the **Screen & Protect API** and its implications for existing invoicing reports. Given the product's early stage and ongoing changes in pricing, it has been agreed that automating the invoicing process via a new Power BI report is not currently a priority.
+
+As the product continues to develop, and with client volume still relatively low, the decision has been made to **maintain manual invoicing** for now. This allows flexibility while we refine the pricing strategy and gather insights on usage patterns and client demand.
+
+Looking ahead, once the **Screen & Protect API** scales enough, with a more stable pricing model and growing adoption, we’ll revisit the reporting needs and adapt the invoicing report to support the Finance and Product teams accordingly. This approach is expected to serve as a model for invoicing processes for future early-stage products as well.
+
+# 2025-06-06
+
+## Test accounts are now excluded from Power BI reports
+
+Good news! Thanks to a piece of work carried out by Dash Squad, we now have a proper way to identify which accounts are for testing purposes in the production backend of Truvi.
+
+At the moment of writing this, 535 accounts have been identified as test accounts.
+
+A few of these accounts have had activity in terms of Bookings, Listings, Guest Journeys, etc. This was bad because it was affecting many critical reports such as Main KPIs. However, these impacts were mostly localised in the historical years of 2022, and a bit of 2023. The impact on 2024 is reduced, and on 2025 almost non-existent.
+
+A large number of these accounts didn’t have any activity at all but were still appearing in account-based reports, such as the Account Management reports; in those, we can observe how many accounts have been cleaned out.
+
+The work carried out on the Data side is to directly exclude these test accounts from the major entities within the DWH; meaning we apply this exclusion by default. This will save us precious time, since any report we build will already contain the logic to exclude these test accounts.
+
+Lastly, we’re aware that there are a few additional accounts that look like test accounts and need to be tagged accordingly. We’ll continue raising and flagging these cases to improve our data quality.
+
+## Host Resolutions overrepresentation bug now fixed
+
+A data bug led to Host Resolutions Payments being overrepresented across various reports. This inflated values for related KPIs and metrics, particularly affecting:
+
+- **Business Overview:** Overstated Host Resolution Payouts, understated revenue retained post-resolutions.
+- **Accounting Reports:** Overstated Host Resolution Payments.
+- **Account Management Reports:** Skewed margins and growth data.
+
+The bug caused a **net overstatement of ~£7.2K**, with the majority of the impact (~£6.5K) occurring in the first five days of June.
+
+The issue was fixed on June 5th. For additional details on the incident, please check the Incident Report below:
+
+[20250605-01 - Overrepresentation of Host Resolutions Payments](https://www.notion.so/20250605-01-Overrepresentation-of-Host-Resolutions-Payments-2090446ff9c9804ca74be8bfae70fa64?pvs=21)
+
+## Finance reporting updates
+
+For the past few weeks we have been closely collaborating with the Finance team. There have been conversations about updating some reports, like the Invoicing & Crediting Report, to better accommodate the needs of the Finance team, as well as about creating new reports like the budget report. During the past week we focused on both of these tasks: doing some work on the current reports, which are under review and will soon be live with the newest changes, and dedicating efforts to researching and identifying all necessary data points from Xero, our accounting software, which will form the foundation of these new reports.
+
+We look forward to sharing more detailed news about these ongoing reporting developments as they progress.
+
+# 2025-05-30
+
+## New Confident Stay Report is now live & important changes on Check In Hero reporting
+
+This week we’ve progressed with Confident Stay modelling within DWH. One of the outputs has been the creation of a dedicated Power BI report to track Confident Stay once the product goes live.
+
+
+
+The new Confident Stay - Overview report is accessible through the Guest Insights Power BI app.
+
+Importantly, we’ve also moved all 3 existing Check-In Hero reports to Guest Insights. Please use the reports in this new location, as at some point in the future we will decommission the dedicated Check In Hero app.
+
+Now, coming back to Confident Stay:
+
+
+
+At the moment, the report holds no data on Confident Stay being offered or purchased until the product goes live. As such, it’s possible that when we go live we might need to make some tweaks. In any case, we’ve already implemented a Guest Journey based conversion funnel, Host Adoption tracking and, last but not least, a dedicated tab for Revenue monitoring.
+
+It’s a minimalistic report but should allow us to track the very basics.
+
+## New Dashboard Reporting: Booking Check-In and Improvements
+
+This week we’ve also done a few improvements on the New Dashboard Overview report in Power BI.
+
+The main upgrade has been the creation of a new tab called Check In Bookings. This tab allows us to understand the composition of Bookings that check in in a given month in terms of when these were created in our systems.
+
+Let’s see an example:
+
+
+
+In this case, I’m filtering for Check-Ins happening between May and December 2025, and I’m only considering Bookings with Upgraded Programs (note that the filter “Has Upgraded Services” is set to True). This indicates the bookings that are potentially going to have at least 1 paid service, assuming these don’t get cancelled. This covers Billable Services, Guest Payment Services, or both.
+
+As we can see in the snapshot, we have 10.1K Bookings with an Upgraded Service checking in in May. Interestingly, 5.5K were actually created within May, while the rest were created in previous months. It’s also interesting to note that of the bookings checking in in June, 4K were created in May. This is quite close to the 5.5K same-month creation & check-in.
+
+This new tab should give a better understanding of when revenue linked to bookings can potentially happen, as well as provide proper tracking of Created vs. Checked-In bookings.
+
+Now, moving to more updates! We’ve conducted a large amount of small improvements across the report. The most relevant aspects have been:
+
+- Booking Detail now allows filtering by "Check In Date". It also explicitly displays both Bookings with Upgraded Programs vs. Chargeable Bookings.
+- User Detail now allows filtering by "In New Dash Since" and displays different measures. At the end of the table we now have an explicit count of Basic Screening Bookings (Bookings - Bookings w. Upgraded Program). We’ve also added an Upgraded Bookings Rate that computes, for each account, the % of Upgraded Bookings vs. total Bookings. Accounts that are not close to 100% are worrying, as it means most of their bookings are free.
+
+For those interested in the full list of changes, you can find it in this list:
+
+- **Full list of improvements**
+
+ Readme:
+
+ - Added HubSpot as source.
+
+ Overview:
+
+ - Funnel now displays full integer value
+ - Global Indicator New Dash users now shows the thousand separator
+ - Moved Global Indicator "Total Listings" above "Active Listings"
+ - Global Indicator "Bookings with a Program containing Upgraded Services" renamed to "Bookings with Upgraded Programs"
+ - Global Indicator "Total Active Listings with an Active Program" renamed to "Active Listings with Active Program"
+ - Global Indicator "Total Active Listings with an Active Program containing Upgraded Services" renamed to "Active Listings with Active Upgraded Programs"
+ - Smaller filters, rearrangement, visualisation updates
+
+ User Detail:
+
+ - Renamed any "w. Upgraded Services" to "w. Upgraded Program"
+ - Added between filter "In New Dash Since"
+ - Smaller filters, rearrangement, visualisation updates
+ - Added Callouts to easily identify main totals on filtering
+ - Added new columns Basic Screening Bookings, Basic Screening Bookings (%) and Upgraded Bookings Rate
+
+ Booking Detail:
+
+ - Added between filter "Check In Date"
+ - Added dropdown filter "Booking Id"
+ - Smaller filters, rearrangement, visualisation updates
+ - Added callout "Bookings w. Upgraded Program". This helps identify the difference vs. Chargeable Bookings i.e., those that have already a chargeable invoicing line or a Guest purchase
+ - Added a note to explain difference between Chargeable Bookings and Bookings w. Upgraded Program.
+
+ Services Adoption:
+
+ - Smaller filters, rearrangement, visualisation updates
+
+ Created Services:
+
+ - Smaller filters, rearrangement, visualisation updates
+
+    Chargeable Services:
+
+ - Smaller filters, rearrangement, visualisation updates
+ - Excluded Dimension "Has Upgraded Services" since it's the same as "Global"
+ - Forced that values represented in both graphs have chargeable amounts strictly greater than 0, to exclude "null" amounts per dimension value. This fixes the Guest Agreement appearing here.
+
+## Data-Driven Risk Assessment Project resumes
+
+After some weeks working on higher priority subjects, this week we’ve resumed the work on the Data-Driven Risk Assessment project (also known as just “data-driven flagging”).
+
+The idea for this project is to understand and build a process that flags bookings based purely on data-driven factors, with the aim of reducing the number of resolution incidents and providing better informed coverage of screening and protection for our hosts.
+
+This is a very exciting yet challenging project, as for a proper process we need to have 1) huge amounts of data 2) with high quality and 3) available at the moment of screening. This forces us to focus on New Dash Bookings and incidents that appear in the Resolution Center, and in both cases we don’t have a massive history.
+
+Despite this challenge, we’ve been analysing the current performance for several weeks now and we’re resuming with phase 2 of the project: the Data Team will invest a bit of time experimenting with Machine Learning models and analysing trends, with the aim of having a predictive system whose performance we can measure before deciding whether to implement it in production.
+
+While the output of such a project is unknown and there’s a risk we don’t reach a good model, it’s very likely that in the process we will obtain insights that increase Truvi’s knowledge of the relationship between Screening, Protection and Resolution Outcomes - the heart of our business model.
+
+## SQL Training in Full Swing for Our Domain Analysts!
+
+Over the past few weeks, our domain analysts have been making great strides in strengthening their data skills. After completing a series of training sessions focused on Excel—where they've become nothing short of masters—we’ve now shifted our focus to SQL.
+
+The team has been actively studying the SQL learning materials we've shared with them, and we’re already seeing strong motivation. To support their growth, we’ve been preparing new practical challenges that will help them better understand how to query data and get familiar with the structure and contents of our data warehouse (DWH).
+
+This hands-on approach is designed to build both confidence and capability, empowering analysts to dive deeper into our data and unlock more insights independently.
+
+# 2025-05-23
+
+## New Dashboard Onboarding report now live
+
+Following [last week’s investigation on Billable Bookings](Data%20News%207dc6ee1465974e17b0898b41a353b461.md), this week we have released a new report within the New Dashboard Reporting application in Power BI: New Dash Onboarding.
+
+This report focuses exclusively on accounts that have not been migrated from Old Dash. The idea behind it is to capture the different steps that each new account needs to complete in order to be considered successfully onboarded: from the moment the contract is signed until the account is finally invoiced.
+
+For instance, let’s focus on accounts that have had a contract signed on March 2025:
+
+
+
+New Dash Onboarding - Overview; for contracts signed on March 2025. Snapshot as of 26th May.
+
+We can observe we had 60 contracts signed in March 2025, but only 19 of these have already been invoiced. We’re aware that a natural delay exists due to the invoicing process being monthly. However, only 37 out of 60 contracts have generated Bookings with Paid Services, which indicates the need to act on certain accounts to activate and speed up revenue generation.
+
+We can also see two additional tables. The first one highlights accounts that ONLY have free bookings and that have not churned. This can be sorted by Total Bookings or Avg. Bookings/Day to track the magnitude of the missed opportunity. For instance, `34922353025-Opulent Stays inc` has had 143 Bookings since it was onboarded, all of them with free services. Let’s deep dive into this account in the Account Detail tab:
+
+
+
+New Dash Onboarding - Account Detail, for 34922353025-Opulent Stays inc. Snapshot as of 26th May.
+
+We can see that this account has Upgraded Programs (which contain Paid Services) in 9 listings, and that the first Upgraded Program was set on the 27th March, so 10 days after the contract was signed. However, it still has no Bookings with Paid Services.
+
+This account currently has the Basic Damage Deposit and the Waiver Plus as active services, so it’s possible that these Guest-facing products still need to be purchased in order for a Booking to be considered as having Paid Services. A quick look at the other report, New Dashboard Overview - Booking Detail, would help determine which Programs are actually applied to upcoming Bookings.
+
+Additionally, we can observe that this account expressed interest on ID Verification during sales calls, but that this service is not currently active in any active listing. A potential upselling opportunity!
+
+Getting back to the Overview tab, we also have a second table that highlights accounts that have had Paid Bookings in the past but that, at this moment, have no Active & Upgraded Program at the Listing level. The interesting example in this case is `34610311272-Hipstay Ltd`, as it still has 4 active listings. Looking into Account Detail for this account:
+
+
+
+We can easily observe that the Program and Listing Funnels miss the last - and most important - step. The account had 25 Paid Bookings, but moving forward it only has the capacity to generate Free Bookings with Basic Screening. This highlights an interesting case to understand why a client has decided to go for free services when it has not even been invoiced yet!
+
+There are additional functionalities in the report which we encourage client-facing teams to explore by themselves. Keep in mind that data is updated once a day, thus small delays might appear when checking against the actual Dashboard configuration.
+
+## Confident Stay revenue now captured in KPIs
+
+This week we’ve modified the Guest Payments related flow in KPIs to ensure we capture Confident Stay revenue once it’s live.
+
+The change affects Guest Revenue, which now includes Confident Stay payments alongside Check In Hero, Deposit Fees and Waiver Fees. This also affects metrics that depend on Guest Revenue, such as Total Revenue or Revenue Retained Post-Resolutions.
+
+Additionally, we’ve created a new metric called Confident Stay Revenue as a standalone revenue tracking for this new Guest Product. Keep in mind that at the moment this is still zero as the product has not been released yet.
+
+## Guest Journey A/B test: London Wallpaper results
+
+We’ve been monitoring the A/B test on the Welcome Page with the London background for over a month. Since the changes only apply to guests travelling to London or Greater London areas, the sample size we got in this time period is much lower than usual (around 400 Guest Journeys).
+
+Seeing that results are still not significant, meaning we cannot really conclude anything from this A/B test, the test was stopped on 27th May 2025. There’s an interesting possibility to re-run this A/B test with additional destinations, so we increase the sample size and the chances of reaching meaningful conclusions. However, this will come later down the line.
+
+The detailed results of the London Wallpaper A/B test are available here:
+
+[2025-05-27 Guest Journey - London Wallpaper A/B Test - Results](https://www.notion.so/2025-05-27-Guest-Journey-London-Wallpaper-A-B-Test-Results-2000446ff9c9800d86f2d3bcfdbbec42?pvs=21)
+
+## Guest Journey A/B test: Your Trip Questionaire has been launched
+
+The next A/B test is called Your Trip Questionaire and is being launched right after the London Wallpaper one, on May 27th 2025.
+
+The Your Trip Questionaire aims to investigate whether asking additional questions to the Guest during the Guest Journey process has an effect on Guest behaviour - in terms of conversion, payments and/or CSAT score.
+
+Since adding additional steps to the Guest Journey might add friction, the A/B test will not have a 50-50% distribution but rather a reduced 10% share in the study variation. This will reduce the potential risk while still ensuring that the hypothesis gets studied properly.
+
+The Data Team will take a close look in the coming days to ensure that any risk is managed accordingly. In the meantime, here are the technical details for this brand new A/B test:
+
+[2025-Q2 - 2 - Your Trip Questionaire - Guest Journey A/B test](https://www.notion.so/2025-Q2-2-Your-Trip-Questionaire-Guest-Journey-A-B-test-1f90446ff9c980a296b9ecb47cad21ef?pvs=21)
+
+## Our Domain Analysts are SQL-ing up
+
+Great news! Our current batch of Domain Analysts have done a great job going through some Excel challenges and training, and we’re now happy with their Excel skills.
+
+This means we now move on to a tougher (but also more juicy) bone: learning SQL. SQL is the language we use to query data from the DWH, so learning it will allow our colleagues to pull all sorts of great stuff out of the DWH. We expect to spend a few weeks with them both upskilling them in SQL and guiding them through navigating and leveraging the contents of our DWH.
+
+Wish them luck: as with all language learning, picking up SQL is going to burn some serious brain calories!
+
+## New currency (AED) in Exchange Rates data
+
+We recently received a request from Ant to cover some features in the Resolution Center: we needed to include the United Arab Emirates Dirham (AED) in our database of currency rates.
+
+Since this was the first time since inception that we had to add a brand new currency to our rates database, we needed to upgrade some parts of our tool `xexe`, which is the code that brings the rates from [XE.com](http://XE.com) into our DWH.
+
+Those changes are now done, and the AED rate history (starting from 2025-01-01) is now present in both the DWH and the Billing DB.
+
+And remember, if you need to add another currency to this list, just let us know!
+
+# 2025-05-16
+
+## Preparing DWH for Confident Stay
+
+This week we’ve resumed the necessary work to properly track and enable future reporting around Guest Products.
+
+We already started to capture the necessary tables [a few weeks ago](Data%20News%207dc6ee1465974e17b0898b41a353b461.md), but now with the coming launch of Stay Disrupt it was about time to continue with the work.
+
+This week we’ve handled a big internal refactor regarding Guest Payments, which is the main source of data for key areas such as A/B testing or Guest Revenue computation. In essence, we’ve split the logic to differentiate between Guest Products (Check-In Hero, Stay Disrupt) and what we call Verification Products (Waivers, Deposits, Fees). This split enables us to capture and transform the data according to the different logic each type of product has, while enabling a cleaner environment within the DWH.
+
+Lastly, everything gets aggregated together into Guest Journey Payments, which includes any payment made by the Guest within the Guest Journey. Besides combining Verification Products and Guest Products, we convert the amounts to exclude taxes for revenue computation purposes.
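+
+Conceptually, the aggregation looks something like the sketch below. The table and column names are illustrative, not the actual DWH models:
+
+```sql
+-- Guest Journey Payments: union of the two product families, with amounts
+-- converted to exclude taxes for revenue computation (illustrative names).
+with verification_products as (
+    select guest_journey_id, payment_id, amount_gross, tax_rate
+    from stg_verification_payments    -- waivers, deposits, fees
+),
+guest_products as (
+    select guest_journey_id, payment_id, amount_gross, tax_rate
+    from stg_guest_product_payments   -- Check-In Hero, Stay Disrupt
+),
+all_payments as (
+    select * from verification_products
+    union all
+    select * from guest_products
+)
+select
+    guest_journey_id,
+    payment_id,
+    amount_gross,
+    amount_gross / (1 + tax_rate) as amount_net   -- revenue is computed net of taxes
+from all_payments
+```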
+
+This was the biggest challenge on the Data side to prepare for Stay Disrupt: mainly, ensuring nothing critical would break! The next step will be ensuring that Check-in Hero reporting is prepared for the incoming changes, before jumping into discussing dedicated Stay Disrupt reporting.
+
+## On New Dash and Billable Bookings
+
+This week we’ve also dedicated quite a bit of effort to understanding why April billable bookings have had a lower volume than expected. This is important since we rely on Billable Bookings as the go-to metric on the bookings side and as a precursor to invoiced revenue.
+
+After the first investigations, there’s no obvious, single explanation that covers this observed decline. Rather, the decline is a combination of small factors: some are actual issues that need fixing, others suggest that we need better business methodology in place, and others are just down to the different logic used to determine when a booking is billable in New Dash with respect to Old Dash.
+
+This exercise is still in progress. At the moment, on the Data side, we’re focusing on providing dedicated account onboarding reporting for those accounts in New Dash that have not been migrated from Old Dash - that is, accounts that are “new business” directly in New Dash.
+
+# 2025-05-02
+
+## Account Growth report is now live
+
+[Last week](Data%20News%207dc6ee1465974e17b0898b41a353b461.md) we explained that we were working on an improved version of the report Account Managers Overview. This week, we’re happy to announce that the new report Account Growth is live! As a summary, here are the main improvements:
+
+- **Growth is now based on Billable Items** (Bookings for Platform, Verifications for APIs).
+- **We forecast the current month**, updating daily - making the report much more timely.
+- **Growth compares the forecast vs. the past 3 months**, focusing on recent trends and reducing seasonality noise.
+- **Impact score now uses Revenue Retained Post-Resolutions**, better aligned with profitability goals.
+
+In terms of visualisation, we kept a similar display to the existing one in the Monthly Growth tab - but now it allows selecting the ongoing month and has the different logic and metrics in place. Importantly, we’ve also added 2 additional tabs:
+
+- **Ongoing Month Overview**: it specifically focuses on the ongoing month and highlights any account that is tagged as Major Decline, as well as the top 10 accounts in Revenue Retained Post-Resolutions.
+
+ 
+
+ Double click-me!
+
+- **Account Growth Detail**: it allows deep-diving into a single account. It displays the historical evolution of the account in terms of growth and revenue metrics, and provides specific context on why the account is labelled with a certain category in the ongoing month.
+
+ 
+
+ Double click-me!
+
+
+With this new report up and running, we’ll proceed to delete the previous Account Managers Overview on May 23rd. Please, if you have any questions or concerns, contact the Data Team before May 23rd!
+
+## Exploratory Data Analysis on Resolution Incidents
+
+This week we’ve also advanced in the scope of the [Data-Driven Flagging Project](Data%20News%207dc6ee1465974e17b0898b41a353b461.md). We’re waiting for more data to reach significant conclusions on the booking status vs. resolution outcome methodology, as we’re only focusing on New Dash protected bookings.
+
+However, in the meantime, we’ve conducted a high-level analysis on any Resolution Incident that appears in Resolution Center, indistinctly if it’s New or Old Dash. This gives us a bit more data (though not an incredible amount) so we’re able to start exploring main trends.
+
+We’ve focused on 3 areas:
+
+- How does the booking duration (number of nights) affect resolution incident occurrence?
+- How do international/national travellers affect resolution incident occurrence?
+- How does the lead time from booking creation to check-in affect resolution incident occurrence?
+
+These areas are high-level enough for us to retrieve first trends without falling into low sample inconclusiveness. So! Time for you, reader, to make a guess.
+
+
+
+Ready?
+
+Here are the actual insights obtained from the data analysis!
+
+- **Longer bookings** are more likely to result in incidents.
+- **National and especially same-town travellers** are associated with higher incident rates than international guests.
+- **The time between booking creation and check-in** does **not** significantly influence the likelihood of incidents or payouts.
+- Across all explored factors, **no clear trend** explains when a payout is likely once an incident has been raised - suggesting that a payout resolution depends on other factors.
+
+How many correct guesses? Any surprises? Let us know through Slack!
+
+The in-depth analysis is available in this Data paper:
+
+[2025-05-02 Exploratory Data Analysis on Resolution Incidents](https://www.notion.so/2025-05-02-Exploratory-Data-Analysis-on-Resolution-Incidents-1e70446ff9c98043b263e3b2eadb79fb?pvs=21)
+
+## Milestone: Live Deals in New Dash surpass those from Old Dash!
+
+On April 29th 2025, and according to Main KPIs data, New Dash surpassed Old Dash in terms of number of accounts that are live.
+
+This is a great milestone! From the first release of the MVP at the end of July 2024 to the current date, many initiatives and projects around New Dash have been impacting the wider business.
+
+
+Source: Business Overview - Main KPIs - Detail by Category, on 29th April 2025.
+
+Congrats to everyone for this big step forward, we win as a team!
+
+## Update on Host Resolutions Payments Report
+
+Following last week's update from the Finance team, we’ve completed key improvements to the **Host Resolutions Payments Report**.
+
+The report now includes **all Host Resolutions Payments**, regardless of whether they were originally recorded as **bank transactions** or as **credit notes**. This ensures a complete and unified view of all resolution-related payouts.
+
+In addition to the data update, we’ve made the report **more user-friendly** and **insightful**:
+
+- Added new metrics like **Average Payment Amount** and its **evolution over time**
+- Introduced a new **Top Accounts tab**, showing:
+ - The **top 10 accounts by total resolution amount paid**
+ - Accounts with the **most resolution claims**
+ - Those with the **highest average amount per claim**
+
+
+
+These changes aim to provide more visibility and value for both the **Resolutions** and **Finance** teams.
+
+# 2025-04-25
+
+## Account growth limitations and how to improve it
+
+It’s been several months since we first released the report of Account Managers Overview, back in Q4 2024. The main idea behind this report is to quickly categorise the different accounts between Major Gain, Gain, Flat, Decay and Major Decay in terms of how the account growth is impacting our business.
+
+While this has been helpful in these past few months, we’ve also been aware that there are some limitations in how the growth and impact scores are being computed:
+
+- Computation is based on Total Revenue, Created Bookings and Listings Booked in Month. This works relatively well for Platform deals, in which the 3 metrics make sense. However, for API deals we do not track Bookings nor Listings, thus we only rely on Revenue. This can reduce the balance of the score.
+- Computation is mostly based on the Year-on-Year (YoY) and Month-on-Month (MoM) evolution of the abovementioned 3 metrics, which then gets averaged. This results in a simple average of a total of 6 evolutions. This has shown some limitations when an account has been live for only about a year or less, in which case an actual MoM decrease can be hidden by the fact that we might be comparing current growth against the initial ramp-up of the account.
+- Information is not very timely… We’re aware that Revenue metrics have an intrinsic delay due to the invoicing cycle. However, Bookings are technically available with a 1-day delay in the DWH, while in the report we currently only use the Bookings from the previous month, so we could provide a greater degree of awareness.
+- Impact score is weighted by Total Revenue, while we now have the ability to use a more margin-related metric: Revenue Retained Post-Resolutions. While Total Revenue is a good indicator, switching to RRPR would provide a better understanding of how an account directly impacts Truvi’s goal of reaching profitability.
+
+It’s also worth noting that this account growth computation is, by nature, complex. For instance, focusing exclusively on MoM evolution could falsely flag accounts as decay while this effect might just be because of seasonality.
+
+
+
+At this moment we’re exploring a more advanced, refined version of the account growth and impact to the business at account level. In essence, what the new version is providing is:
+
+- We only base the growth on Billable Items. These refer to Billable Bookings for Platform Deals, and to Billable Verifications for API Deals.
+- Billable Items are forecasted at the end of the current month. This means that we’re not only providing information up to yesterday, but rather aiming to predict the monthly Billable Items of the current month. This forecast would change every day and become more accurate as the month progresses. This is critical to be able to detect drastic changes of an account while in the month!
+- The growth is computed only by comparing the current month’s forecasted Billable Items vs. the actual Billable Items of the past 3 months (see the sketch after this list). This takes into account both the absolute values (for instance, 100 Billable Items vs. 120) as well as the relative share of Billable Items of a given account vs. the total (for instance, 0.5% vs. 0.3%). This ensures that we’re able to base the growth only on recent data while ensuring that potential seasonal effects are diluted by taking the relative share.
+- The impact score is based on Revenue Retained Post-Resolutions, rather than Total Revenue.
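+
+A simplified sketch of this logic, with hypothetical table and column names, could look like the query below. It is meant to illustrate the idea rather than reproduce the actual implementation:
+
+```sql
+-- Account growth sketch: forecast the ongoing month from the month-to-date pace,
+-- then compare against the past 3 months in absolute terms and as the account's
+-- relative share of all Billable Items (all names are illustrative).
+with forecast as (
+    select
+        account_id,
+        billable_items_mtd * 1.0 / days_elapsed * days_in_month as forecast_items
+    from billable_items_current_month
+),
+forecast_total as (
+    select sum(forecast_items) as total_forecast from forecast
+),
+history as (
+    select
+        account_id,
+        avg(billable_items)                     as avg_items_past_3m,
+        avg(billable_items * 1.0 / total_items) as avg_share_past_3m
+    from billable_items_past_3_months
+    group by account_id
+)
+select
+    f.account_id,
+    f.forecast_items / nullif(h.avg_items_past_3m, 0) - 1                      as absolute_growth,
+    (f.forecast_items / t.total_forecast) / nullif(h.avg_share_past_3m, 0) - 1 as relative_growth
+from forecast f
+cross join forecast_total t
+join history h using (account_id)
+```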
+
+So far the first tests look successful! The accounts that we flag as Major Decay indeed seem to be showing a clear decay, followed by a subsequent negative impact on Revenue metrics when deep-diving into the Account Performance report. And… probably the best thing is that now the information is as timely as we can get!
+
+This next week we will finalise the tests, do small tweaks here and there and start working on a Power BI report.
+
+## Finance Team Updates How Host Resolution Payments Are Processed
+
+The Finance team is making an important change to how **Host Resolution Payments** are recorded in our systems. This update affects where the information is stored and how it appears in reports, helping us manage and track these payments more accurately.
+
+Until now, Host Resolution Payments were recorded as regular **bank transactions**. Starting from **March 31, 2025**, they will instead be recorded as **credit notes**. This shift brings better alignment with our financial processes and improves how we categorize and report these payments.
+
+While the categorization of payments remains mostly the same, there are a few updates, including a new account code to capture certain types of resolutions more precisely.
+
+To make this transition smooth:
+
+- A **new table** will be created in the DWH to combine both the old and new data sources.
+- Our **reporting models** will be reviewed and updated to make sure the change is properly reflected.
+- Key reports like the **Main KPIs Report** and the **Host Resolutions Report** will be updated accordingly.
+
+Our teams are working closely with Finance to ensure there’s no double counting, that all necessary data fields are captured, and that everything is fully tested before going live.
+
+## Update on A/B Tests: New Illustration and Destination Welcome Page
+
+After almost six weeks, the A/B test for the **New Illustration** has been completed. While no significant positive impact was observed, there were also no negative effects — the main goal of the study. The final decision on whether to roll out the New Illustration now rests with the Guest team.
+
+You can review the full test results [here: [2025-04-15 Guest Journey - New Illustration A/B Test - Results](https://www.notion.so/2025-04-15-Guest-Journey-New-Illustration-A-B-Test-Results-1ba0446ff9c980f2893ede0970611156?pvs=21) ].
+
+### Next steps
+
+We’re launching a new A/B test focused on the **Destination Welcome Page and Message**. In this test, guests will see a **custom wallpaper image and welcome message** tailored to the town where they have booked their stay.
+
+- The initial rollout will target bookings in **London and Greater London**.
+- Based on the results, we may expand the test to include more cities in the future.
+
+You can find all the details about this new A/B test [here: [2025-Q2 - London Wallpaper - Guest Journey A/B test](https://www.notion.so/2025-Q2-London-Wallpaper-Guest-Journey-A-B-test-1d80446ff9c980319eb2c0e97e41be1e?pvs=21) ].
+
+# 2025-04-11
+
+## KPIs refactor finished
+
+Following [last week’s update](Data%20News%207dc6ee1465974e17b0898b41a353b461.md), early this week we’ve finished the refactor on the internal KPIs modelling. This should enable us to provide more flexibility and address further needs on any reporting that requires this information.
+
+At the moment, this KPIs flow is being used mostly for Main KPIs, Account Management and New Dash reporting - so quite central.
+
+Unfortunately, we faced a data quality issue linked to a wrong computation that was released with this refactor. The incident has now been resolved, but for further details you can check the incident page here:
+
+[20250409-01 - Wrong computation on Revenue Retained metrics](https://www.notion.so/20250409-01-Wrong-computation-on-Revenue-Retained-metrics-1d10446ff9c980e0b6d3e52b40879b68?pvs=21)
+
+Thanks again to Kayla for spotting and raising it!
+
+## Account Performance reporting now live
+
+The first interesting piece of work after the [KPIs modelling refactor](Data%20News%207dc6ee1465974e17b0898b41a353b461.md) has been the implementation of the Account Performance report, which sits in Account Management Power BI app.
+
+In short, this is an improved version of the previously existing tabs of Detail by Deal and Deal Comparison in the Main KPIs report. These were not very good in terms of usability and, at the same time, were not providing timely data.
+
+With the new report and thanks to the refactor, we’re now able to deep-dive into several tabs to get insights, see below a few examples:
+
+Display the performance of several metrics of a single account, at a monthly or month-to-date level:
+
+
+
+Display several metrics over time for a single account:
+
+
+
+Display a single metric for a single account year-on-year:
+
+
+
+Compare how different deals perform over time and the % share on a single metric: and here we can select multiple filters, such as Account Manager, Deal Lifecycle State, Billing Country, Business Scope (API/New Dash/Old Dash), Listing Segmentation… even filter by the value of the metric itself:
+
+
+
+… and even check the size and trend of accounts linked to Account Managers, for instance, for New Dash:
+
+
+
+This is quite a significant change and there’s tons of information hidden here. We highly encourage you to take a look at the Data Glossary tab for further information. We expect that it will take a bit of time to reach its full potential, so reach out to us if you want a demo!
+
+Last but not least, we aim to remove the old tabs in Main KPIs on Detail by Deal and Deal Comparison on April 25th, as these will become obsolete.
+
+## Keeping our reports updated
+
+Last week, we rolled out several exciting updates across our Power BI reports to make data exploration smoother, faster, and more insightful. Here’s a quick overview of what’s new:
+
+### New Dash Overview: *Adoption Services Tab Added*
+
+The **New Dash Overview** report now includes a brand-new **Adoption Services** tab!
+
+Users can now monitor how adoption rates for different services in New Dash are evolving — whether by **users**, **listings**, or **bookings**. This update makes it easier to track service engagement over time and identify trends or drops in usage.
+
+
+
+### Churn Report: Multi-Month Selection Enabled
+
+We heard your feedback!
+
+The **Churn Report** now supports **multi-month selection**, allowing users to analyze data over broader time periods with just a few clicks. This change helps speed up long-range trend analysis and simplifies comparisons across months.
+
+### API Reports: Fresh Look & More Filters
+
+We’ve also made improvements to a few of our **API-related reports**:
+
+- Cleaned up the design to match our latest visual standards.
+- Added new **filtering options** to give users more flexibility and control over the data they explore.
+
+# 2025-04-04
+
+## **Data Alerts: Keeping Our Data Strong and Reliable**
+
+Over time, the Data team has implemented a system of **data alerts** across all our models to help ensure the accuracy and consistency of the information we provide.
+
+These alerts work quietly in the background, constantly monitoring for anomalies, missing data, or unexpected changes. Their goal is simple: **to make sure our data remains trustworthy and actionable**.
+
+### Why It Matters
+
+In the past few weeks, these alerts have proven especially valuable. We've seen a rise in the number of issues detected, and thanks to the alert system, we've been able to:
+
+- **Identify problems early**
+- **Raise awareness quickly**
+- **Ensure that the data in our reports remains reliable**
+
+This proactive approach helps us maintain confidence in the numbers we use every day.
+
+### A More Resilient Data Ecosystem
+
+Our alert system is a key part of maintaining a resilient and transparent data layer. It allows us to catch and address issues before they impact decision-making—ensuring the data you see is as accurate and up-to-date as possible.
+
+## KPIs refactor in progress
+
+Following the several KPI-related data needs of the past weeks, we ended up building quite a few models within the DWH, which has become quite a complex ecosystem.
+
+In order to keep our flow as scalable as possible, we’re conducting a small refactor that will not have any impact on the business users; but that will enable us to maintain and provide new initiatives in the future much easier.
+
+The scope of this refactor is double:
+
+- Ensure that any metric, even if complex, is computed within our KPIs folder, following the Data Team conventions. This was not the case for heavy logic metrics in the scope of Churn Rates and Onboarding MRR.
+- Remove the strict dependency on Account Management reporting from the monthly by-deal KPIs. This has served us well for the past months but, being realistic, we need more timely data regarding Account performance. Conducting this decoupling enables us to freely adapt the Account Management data flow without impacting other reports.
+
+Additionally, we’re testing a new framework called dbt audit helper that is very helpful for identifying changes between model outputs. It enables us, for instance, to validate that the previous output of a production model is exactly the same as that of the newly proposed refactored model - and if it’s not the same, it will identify which rows have differences. Combining this audit tool with a few other tricks, we’re able to easily - and rapidly - try out code while ensuring the output remains the same; even for tables with tens of columns and millions of rows.
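+
+As an illustration, a typical comparison with the audit helper looks roughly like the snippet below. The model names and primary key are hypothetical, just to show the shape of the call:
+
+```sql
+-- analyses/compare_kpis_refactor.sql (hypothetical file and model names)
+-- Compares the production model against the refactored candidate row by row;
+-- the output summarises which rows match in both relations and which differ.
+{{
+    audit_helper.compare_relations(
+        a_relation=ref('kpis_monthly'),
+        b_relation=ref('kpis_monthly_refactored'),
+        primary_key='account_month_id'
+    )
+}}
+```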
+
+## Data-driven flagging project gets a green light
+
+At Truvi, we have visibility into the details of bookings and the incidents that happen in them - for hundreds of thousands of bookings per year.
+
+This puts Truvi in a unique position to *know* (not guess, but actually know) which characteristics of a booking, listing and guest can be early signs of damage risk.
+
+Building data-driven, automatic, risk-assessing models would enable us to deliver great value to our customers and much more performant risk-management policies for our company. Succeeding in this area would boost the P&L by improving our products (more sales, less churn) and by helping us reduce our resolutions payouts (save in protection costs).
+
+This is an idea that has been wandering around the company for some time. Together with Matt, we decided to shape this a bit more seriously and discuss it. The Data team’s proposal (which you can find here: [20250331 - Actuarial Screening Project Proposal](https://www.notion.so/20250331-Actuarial-Screening-Project-Proposal-1c70446ff9c9809f8285eba86ddcbdb2?pvs=21) ) was born, and we’ve agreed to give phase 1 of this project a go.
+
+Our goal for phase 1 is simply to measure the performance of our current flagging system in terms of spotting risk, and to run some numbers on how improvements in flagging would drive results for Truvi and our customers. We are committed to completing phase 1, but will decide whether to pursue the rest of the project depending on what we find during this first phase.
+
+# 2025-03-28
+
+## Business Targets is now live
+
+As presented recently in the Town Hall, we have been working on a major piece of work to track performance against targets.
+
+This target exercise has been going on for a while, and has finally materialised in several updates to Main KPIs that allow us to track the main metrics in far more depth. In essence, Main KPIs now actually contains KPIs, whereas before it mostly contained tons of metrics.
+
+Probably the biggest impact of this piece of work is a simple yet informative Power BI tab in Main KPIs called **Business Targets:**
+
+
+
+In this report we will be able to see the latest status of the main timely drivers of performance for our business, always updated up to yesterday (inclusive).
+
+The main drivers are the following:
+
+- **New Deals**: how many new clients are going live?
+- **Churning Deals**: how many existing clients are offboarding?
+- **Live Deals**: how many clients are currently live?
+- **Host Resolutions Payouts**: how much are we paying out in terms of resolutions?
+- **Billable Bookings**: how many bookings can be billed?
+
+Each of these drivers has a target, displayed in pink in the gauges. For these drivers, we have both a Monthly tracker and a Year-To-Date (YTD) tracker: the monthly one reflects how much we should achieve (or avoid reaching) in a given month, while the YTD one aggregates all prior monthly targets up to the current month. It’s important to note that the year considered is the financial year, not the calendar year. This means that financial year 2025 ranges from 1st April 2024 to 31st March 2025.
+
+We can also see that there’s an Actual and Projected value. The Actual value is the value observed in the Month-To-Date or Year-To-Date up to the Last Update date. The Projected, on the other hand, is a forecasted value at the end of the month. So, if we focus on New Deals as an example:
+
+
+
+We see that we achieved 69 New Deals from the 1st of March to the 27th of March, and we’re aiming to reach at least 86 by the 31st of March. If we keep the same rate of New Deals constant over the 4 remaining days, how many New Deals would we have by the end of March? 69 New Deals divided by 27 days gives ~2.6 New Deals per day; multiplying this by 31 days gives the ~79 New Deals we’re projecting by the end of the month. In other words, we’re projecting roughly 10 additional New Deals at this rate. Since this figure of 79 Projected New Deals is below the target of 86, the figures are displayed in red, indicating that we’re at risk of not achieving this objective. This can also be observed by noticing that the grey sector does not surpass the pink target line.
+
+For YTD, the idea is the same. The 982 Actual contains all New Deals brought in from 1st April 2024 to 27th March 2025. If we add the 10 projected New Deals of this month to the Actual, we get the 992 Projected New Deals in YTD terms.
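+
+As a quick illustration, the whole projection boils down to a couple of arithmetic steps - here sketched in SQL with the New Deals figures from the example above (illustrative only, not how the report is built):
+
+```sql
+-- Monthly and YTD projection for the New Deals example (illustrative figures).
+SELECT
+    69                                 AS actual_mtd,           -- New Deals observed 1st-27th March
+    ROUND(69.0 / 27 * 31)              AS projected_month_end,  -- ~2.6 per day * 31 days ≈ 79
+    982                                AS actual_ytd,           -- New Deals 1st April 2024 - 27th March 2025
+    982 + ROUND(69.0 / 27 * 31) - 69   AS projected_ytd;        -- 982 + (79 - 69) = 992
+```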
+
+It’s worth noting that a metric being below target can be good in some cases, such as Churning Deals (the fewer clients that offboard, the better!) or even Host Resolutions Payouts:
+
+
+
+In this case, we observe that at the Monthly level we’re far below the target, so figures are green. However, the YTD figure actually surpasses the target, which is bad. Effectively this means that in past months we didn’t achieve the targets, even though the current month is going very well so far.
+
+For Live Deals, however, things work a bit differently. Monthly and YTD show the same value, and we only display the Actual value (we only count clients that are currently live). We cannot directly influence Live Deals; rather, we capture new clients and ensure the existing ones do not offboard.
+
+Last but not least - we will be sending weekly emails to all Truvi employees so you have these figures in mind! The first delivery is expected on Friday 4th of April.
+
+## New Dash Migration project: automating data extraction
+
+Back at the beginning of the year we supported our AM colleagues by providing pricing estimates for users transitioning from Old Dash to New Dash, effectively moving to the new services and pricing.
+
+Now… this has been outdated for a while, and we needed to do another update. Since the original exercise was quite complex in terms of Excel formulas, we decided to automate the data extraction as much as possible so we can provide faster updates in the future. This mostly means computing a big part of the logic directly in SQL, where previously it lived in Excel.
+
+A couple of versions have already gone out and we’re finalising some additional requirements in order to reduce manual workload from AM teams in charge of the migration.
+
+## New Dash reporting updates
+
+As the migration of clients from the old to the new dashboard progresses significantly, our focus has shifted toward **enhancing data depth** and **improving reporting capabilities**.
+
+This week, we've been developing new models to **analyse the adoption and usage of New Dash services** across:
+
+**Account users** – Percentage of users offering the service.
+
+**Listings** – Share of active listings with the service enabled.
+
+**Bookings** – Percentage of bookings where the service has been applied.
+
+By implementing these metrics, we’ll be able to track **historical trends** and answer more questions like:
+
+- *Are more of our customers adopting this service over time?*
+- *Is there any decline in usage that we need to address?*
+
+To complement these insights, we're also introducing **deal lifecycle states** in the **Overview** and **User Details** tabs. This segmentation will allow users to:
+
+- Analyse service adoption based on different deal lifecycle stages.
+- Exclude **churned deals** for cleaner metric numbers.
+
+With these updates, the **New Dashboard Report** is becoming an even more relevant tool for tracking customer engagement and service adoption trends.
+
+## dbt CI pipelines are now protecting your reports
+
+As the Truvi data needs grow and evolve, our DWH keeps growing more and more complex. With close to 400 tables and a few thousand columns, working on it is not for the faint of heart. Because of this complexity, the risk of making mistakes when developing our DWH is increasing, which can lead to side effects like reports not displaying what they should, or not displaying anything at all.
+
+To unload some of this burden from the Data team members’ shoulders, we’ve recently set up Continuous Integration (CI) pipelines in our code repository. These pipelines are a series of automatic steps that get executed every time a team member wants to modify the code in the DWH. They check for several possible mistakes that might be introduced, and act as a traffic light of sorts: if everything is ok, we can go ahead and use the code. If we identify any issue in the pipeline… that’s great! One bug that didn’t slip through.
+
+
+
+Our CI pipeline letting Uri know he hasn’t broken main KPIs… this time.
+
+As with many other technicalities, you probably won’t *see* much of this… but remember: every day you wake up and your report is there, working as always, it’s thanks to tools like this!
+
+# 2025-03-14
+
+## New Dash reporting small improvements
+
+This week we’ve also continued with the small improvements to New Dashboard reporting in Power BI. We’ve added a small display in the Overview page to show the number of distinct Deal Ids we have, in addition to the existing New Dash User count.
+
+
+
+In theory, in New Dash, a Deal should only be represented by a single User. However, during the migration we’re noticing that this is not always the case, and sometimes we even get alerts due to Deals being duplicated across different users. This small change should provide a better overview of data quality for New Dash.
+
+Additionally, we’ve taken the opportunity to do some minor changes such as adding the share of chargeable amount in the Chargeable Services summary table:
+
+
+
+Lastly, we also took the opportunity to update the report with the new Truvi style.
+
+## New Guest Products tables ingested into DWH
+
+This week we’ve also started a new line of work to be able to capture Guest Products in DWH. At the moment, the following new backend tables are in sync:
+
+- guestProduct.DisplayDetail
+- guestProduct.Configuration
+- guestProduct.ConfigurationLevel
+- guestProduct.ConfigurationPricePlan
+- guestProduct.ConfigurationStatus
+- guestProduct.Product
+- dbo.VerificationRequestToGuestProduct
+- dbo.VerificationRequestGuestProductToPayment
+
+The first step of DWH modelling, staging, has also been completed for all the above-mentioned tables, and we’re currently deep-diving into the intermediate layer. There’s still work to do here, so stay tuned for further updates.
+
+## Anticipating account behaviour: First steps towards account-based created bookings forecasting
+
+It is well known that different metrics have different time availabilities. For instance, within DWH we’re able to provide the amount of bookings created up to yesterday, but we’re not able to do so for total revenue as this depends on the invoicing. Rather, on Main KPIs, we will have the February total revenue figures available on the 20th of March. However, for Account Managers reporting, this will be available on the 1st of April.
+
+While this is important to ensure a certain level of data quality in the reported metrics, we also need to anticipate account behaviour, especially to understand potential upcoming impacts and prevent churn.
+
+Ideally, we would like to forecast revenue-related metrics for each account, as this is the most insightful measure of account performance. However, forecasting revenue is… well, complicated. We have different client types, from APIs to Platform, with different dashboards, different applied services and price plans, and these can change at different moments in time, etc. So we have started with something far simpler: aiming to predict how many bookings an account will have created by the end of the current month. In many cases, the number of bookings created is a good indicator of performance - despite not being perfect - and can help us understand in almost real time if there’s any change in behaviour at account level.
+
+While account-based forecasting is a clear use-case, keep in mind that we can predict this for any category/dimension, since the projection is built on top of our KPIs models. For instance:
+
+
+
+Snapshot of the top 12 categories that, as of 17th of March, have had the most created bookings in the current month of March. With the actual figures we’re able to estimate an end-of-month figure, as shown in current_month_projected_created_bookings. How good is this projection? Well, it depends, as shown in the relative_error. As of 17th of March, it seems that Old Dash (1.3%) and Global (3.7%) projected bookings are quite stable, while for New Dash (34%), for instance, we’re less certain of how the month will end. This is normal, since New Dash has more and more clients being migrated, generating more bookings and therefore becoming harder to predict.
+
+In order to have a forecast at account level, we currently have a very simple model within DWH that aims to project the amount of bookings created at the end of the month by using:
+
+- The average amount of bookings created in the past 7 days, and
+- The average amount of bookings created in the current month-to-date
+
+Intuitively, at the beginning of the month, this projected figure is likely going to be quite inaccurate. However, as the month advances, it’s going to get closer and closer to the final, real figure. In order to provide an estimate of how much we can trust this projection for the current month, we do a simple exercise: we apply the same projection logic to the past 3 months and compare those months’ forecasts against the actual end-of-month figures. From this we get a simple projection error that we can attribute to the current month’s projection. This is also very important if at some point we want to try other forecasting methods, since it lets us choose the one that gives the best results.
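+
+For those curious about the mechanics, the sketch below shows the general shape of the projection step. How the two daily rates are blended (a plain average here) and all table and column names are assumptions for illustration, not the exact DWH implementation:
+
+```sql
+-- Illustrative projection of end-of-month created bookings per account.
+-- Assumes a table daily_created_bookings(account_id, booking_date, bookings_created); names are hypothetical.
+WITH daily_rates AS (
+    SELECT
+        account_id,
+        AVG(bookings_created) FILTER (
+            WHERE booking_date >= CURRENT_DATE - INTERVAL '7 days'
+        ) AS avg_last_7_days,
+        AVG(bookings_created) FILTER (
+            WHERE booking_date >= DATE_TRUNC('month', CURRENT_DATE)
+        ) AS avg_month_to_date
+    FROM daily_created_bookings
+    WHERE booking_date < CURRENT_DATE
+    GROUP BY account_id
+)
+SELECT
+    account_id,
+    -- Blend the two daily rates (simple average, as an assumption) and scale to the full month
+    (avg_last_7_days + avg_month_to_date) / 2.0
+        * EXTRACT(DAY FROM DATE_TRUNC('month', CURRENT_DATE) + INTERVAL '1 month' - INTERVAL '1 day')
+        AS projected_month_end_bookings
+FROM daily_rates;
+```
+
+The error estimate then comes from re-running the same logic as if “today” were a day in each of the past 3 months and comparing those projections with the known end-of-month figures.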
+
+We’ve also compared a few cases of projected booking decay with actual data and input from our Account Manager colleagues, and it seems to work well within reasonable limits - after all, trying to predict the future is a complex task.
+
+For the time being, this projection logic is still sitting in the DWH, waiting to be surfaced to the business teams. Stay tuned for further updates!
+
+## Deals lifecycle alignment with HubSpot
+
+We recently identified a discrepancy between the data used in our financial model for Live Deals and the data available in Power BI. The difference became evident when comparing our reporting approach with Alex Anderson's methodology for retrieving and structuring deal data.
+
+To resolve this, we decided to align our entire deal lifecycle with HubSpot, ensuring consistency with the reporting structure Alex has been using. This transition has significantly improved accuracy, bringing reported amounts much closer together.
+
+While this is a step forward, we are still working on filling in some missing data within HubSpot. Additionally, some deals do not exist in HubSpot at all. These will now appear in our reports under the label **"99-NOT IN HUBSPOT"** to clearly indicate their status.
+
+## Successful Excel tips & Tricks Session
+
+Our recent **Excel Tips & Tricks** session was a great success! 🚀
+
+You can find the recording of the session here https://guardhog-my.sharepoint.com/personal/joaquin_ossa_superhog_com/_layouts/15/stream.aspx?id=%2Fpersonal%2Fjoaquin%5Fossa%5Fsuperhog%5Fcom%2FDocuments%2FRecordings%2FExcel%20useful%20tips%20session%2D20250312%5F150535%2DMeeting%20Recording%2Emp4&referrer=StreamWebApp%2EWeb&referrerScenario=AddressBarCopied%2Eview%2E75aed233%2D08f6%2D4ff2%2D8d21%2D6de356fd5450 and the documentation [Excel Tips and Tricks](https://www.notion.so/Excel-Tips-and-Tricks-15e0446ff9c980b59831df76c4d41567?pvs=21)
+
+### **What we covered:**
+
+✅ Most-used shortcut keys for faster navigation
+
+✅ Basic and advanced formulas to boost efficiency
+
+✅ Best practices for cleaner and more effective spreadsheets
+
+We’re also available for more personalized sessions tailored to your team’s needs! If you'd like a **closed session**, we can work through practical problems together to help your team get more comfortable with Excel. **Hands-on learning is the best way to master it!** 📊
+
+# 2025-03-07
+
+## Detecting Hyperline invoices in Xero
+
+This week we’ve carried out a small improvement in our accounting DWH models for Xero to be able to discern which invoices and credit notes come from Hyperline and which do not. The main benefit of this is ensuring that invoices and credit notes are correctly attributed to a month in the same manner as the old documents in Xero. In other words, ensuring that our reporting is consistent.
+
+While at some point in the future we’ll most certainly read invoicing information from Hyperline itself, we will still need to exclude the documents that are copied over to Xero to avoid duplicated invoices. This also ensures that, during this transition period in which both Hyperline and Xero are used for invoicing, we keep control within the DWH to ensure proper reporting capabilities.
+
+## Platform or API deal type now available in Account Managers reporting
+
+A small improvement has been released this past week concerning Account Managers reporting, both for the AM Overview and the AM Margin: now we’re informing whether the Deal is an API client or a Platform client.
+
+
+
+While this is not a massive change from user point of view, it has enabled us to identify these deals in the modelled structure within DWH. This will likely be of massive help in the future when improving KPIs and Account Managers reporting capabilities, as having this distinction means we can implement different logic if needed.
+
+## Truvi rebranding on Power BI ongoing
+
+This week we’ve also started on the Power BI rebranding, which effectively means modifying the background, logo and some colour displays around Power BI to fit with Truvi’s brand.
+
+
+
+The new template in place for Main KPIs
+
+At the moment these reports have already been rebranded:
+
+- Main KPIs
+- Account Managers (all reports)
+- Miscellaneous Reports (Currency exchange and Deal consolidation)
+- Truvi Reporting (previously Superhog Reporting)
+- API reporting (all reports)
+
+While we still have many reports to update, we will do so either when we need to work on them for another reason or when we have a few minutes to spare. Massive thanks to Pedro and Sergio for creating the new Truvi templates for Power BI!
+
+## New Churn Report is now live
+
+We’re excited to announce the release of a new Churn Report in the Account Managers reporting app! Previously, account managers had to extract this data manually from HubSpot, resulting in only basic and essential insights. This new report automates the process and includes additional valuable data to better support the team. It provides insights into the number of accounts that have churned throughout Truvi's history, including their performance and reasons for offboarding.
+
+
+
+We hope this helps account managers better understand the impact of churned accounts and identify ways to enhance support for our current clients.
+
+[Churn Report - Power BI](https://app.powerbi.com/groups/me/apps/bb1a782f-cccc-4427-ab1a-efc207d49b62/reports/d4955aad-1550-46c7-9549-2bdeebb99286/ReportSectionddc493aece54c925670a?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi)
+
+## New Dash fixes and improvements
+
+Last week, we made several updates and fixes to the New Dash Overview report. We introduced the new Guest Agreement service, a free offering for some client bookings as part of their program.
+
+Additionally, we fixed a couple of bugs—special thanks to Alex’s sharp eye for spotting these! One issue involved displaying inactive PMS accounts, while another affected the correct computation of bookings with guest services.
+
+A big shout-out to Alex for helping us catch these! We encourage everyone to stay vigilant and report any potential errors in our reports. While we strive to minimize mistakes through thorough reviews within the data team, occasional slips can happen, and your feedback is invaluable in ensuring accuracy.
+
+# 2025-02-28
+
+## Resolutions Incidents Report
+
+Now that we have access to Resolutions data, we’re excited to introduce a new report designed to generate valuable insights from this information.
+
+Our first delivery will be the Resolution Incidents Report, which provides a comprehensive overview of incidents reported by hosts. This report will include agent performance metrics (insights into case handling and decision-making), host incident trends (number of incidents, claims, and resolutions) and a detailed incident breakdown with a complete view of each case, including compensation amounts and resolution processes.
+
+This report will help us better understand trends, improve decision-making, and optimize our resolution process.
+
+## Upcoming Excel Training Session
+
+Are you ready to take your Excel skills to the next level? Join us for an upcoming **Excel training session** next Wednesday March 12th where we'll cover key practices to improve efficiency, essential **shortcut keys**, and powerful **formulas** to make your work easier and more effective.
+
+### **What will we see?**
+
+- **Time-Saving Shortcuts** – Master the most useful keyboard shortcuts for faster navigation and data handling.
+- **Essential Formulas** – Discover formulas that simplify complex calculations and analysis.
+- **Recommended Best Practices** – Learn how to structure and manage your data efficiently.
+
+Whether you’re a beginner looking to improve your workflow or an experienced user wanting to refine your skills, this session will provide valuable insights to **work smarter, not harder** in Excel.
+
+Stay tuned for the session details—we look forward to seeing you there!
+
+## Towards FY2026: Main KPIs update
+
+Tracking metric performance against targets is crucial as we approach the start of the new financial year 2026, in April 2025. To support this, we have updated the Power BI report **Main KPIs**, incorporating three new tabs designed to enhance visibility and analysis of key business metrics.
+
+
+
+New tabs available in Business Overview - Main KPIs report
+
+The **Main KPIs** report now features:
+
+- **Main KPIs Overview** – A revenue and payouts-focused tab, providing insights into the evolution, contribution, and impacts of these figures on Month-to-Date (MTD) and Year-to-Date (YTD) performance. This high-level view ensures a quick grasp of financial health.
+- **KPIs Tracking** – A holistic snapshot of performance against targets for all metrics within a single month. This tab enables a comprehensive understanding of where we stand across different key indicators.
+- **KPI Detail** – A deep-dive view for individual metrics, comparing performance against targets (both MTD and YTD), as well as achievement rates relative to End-of-Financial-Year (EOFY) goals. Additionally, this tab features a graphical display that visualizes trends over time.
+
+
+
+With these enhancements, teams will now be able to track and analyze performance with greater clarity and precision. Whether assessing overall revenue impact, evaluating specific KPI progress, or identifying trends over time, the new tabs in the **Main KPIs** report serve as a powerful tool for data-driven decision-making.
+
+# 2025-02-21
+
+## Resolutions Data ingested into DWH
+
+This week we’ve reached a new milestone: Resolution Center data is now ingested in DWH.
+
+We have a new integration with the Incidents container on CosmosDB that contains Resolutions data, which will enable us in the future to gain further understanding on Claims and enrich our reporting. After a few days, we confirm that the integration between CosmosDB and DWH for this Resolutions stream is working nicely.
+
+At this stage the information is mostly stored in a raw manner, as it comes from Cosmos DB. In the following days we’ll start modelling the data internally to enable reporting and analytical use-cases.
+
+## Screen & Protect Invoice Report
+
+Over the past few weeks, we have been developing new models to streamline the invoicing process for our latest API products, such as Screen & Protect and Check-in Hero. These models are designed to improve efficiency and accuracy in our billing system.
+
+Now that we have finished working on the **Screen & Protect API invoice report**, this report provides a detailed breakdown of all API transactions, ensuring transparency and accuracy in billing. It captures key data points such as transaction counts and associated costs (depending on each client’s protection plan), making it easier to validate and reconcile invoices.
+
+## New Active PMS Report
+
+As part of our ongoing efforts to enhance Superhog reporting in Power BI, we have now completed the final update—**the Active PMS Report**.
+
+This report provides **detailed insights into active Property Management Systems (PMS) within Truvi**, including:
+
+**Accounts with Active PMS** – A clear overview of all accounts using a PMS
+
+**Number of Listings** – Tracking the number of properties associated with each PMS
+
+**Bookings Generated** – Insights into the volume of bookings coming from each system
+
+The **Active PMS Report** is a valuable tool for monitoring partner activity and understanding the impact of different PMS platforms on our business.
+
+## Billable Bookings metric adaptations
+
+Following the inclusion of the [new Business Scope category](Data%20News%207dc6ee1465974e17b0898b41a353b461.md) in Main KPIs, we’ve reworked how Billable Bookings are considered depending on the business scope - Old Dash, New Dash & APIs.
+
+We now have 2 metrics:
+
+- Est. Billable Bookings: this is mostly a continuation of our previous metric. It estimates WHEN a booking is supposed to be billed. For Old Dash, this follows the previous logic that depends on the Price Plan of the user. For New Dash, it is assumed to be the first date on which any billable service of that booking can be billed. This means that if a New Dash booking has several billable services billed in different timeframes, we assume the booking is billed on the first billable date. This metric is timely, but might not be 100% accurate depending on the use-case, since attributing bookings to a billable time is opinionated.
+- Billable Check Out Bookings: while it’s complex to say when a Booking is billable, it’s much easier to assess whether a Booking is billable or not. This is why we can consider a Booking billable once all possible services have been billed; that is, at the end of the lifecycle of the booking: the check-out date. This is exactly what the Billable Check Out Bookings metric does: it only counts Billable Bookings at a given Check Out date. This metric is more accurate, but it’s not timely.
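+
+Schematically, and with purely illustrative table and column names, the two attributions differ only in the date a billable booking is counted against:
+
+```sql
+-- Illustrative only. Assumes booking_services(booking_id, check_out_date, is_billable, billable_from_date).
+
+-- Est. Billable Bookings: attribute the booking to the first date any of its services can be billed.
+SELECT booking_id, MIN(billable_from_date) AS est_billable_date
+FROM booking_services
+WHERE is_billable
+GROUP BY booking_id;
+
+-- Billable Check Out Bookings: only count the booking once its lifecycle ends, at check-out.
+SELECT booking_id, MAX(check_out_date) AS billable_check_out_date
+FROM booking_services
+WHERE is_billable
+GROUP BY booking_id;
+```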
+
+For APIs we don’t have any content on Bookings so far, since we still need to work out the implementation internally.
+
+For further information, we encourage you to check the Data Glossary which contains the definitions of both metrics.
+
+# 2025-02-14
+
+## Data News update: previous history moved to a separate Notion page
+
+After several entries in the Data News, we reached a point in which opening the Notion page or adding new content has become painfully slow - this is how busy we are in the Data Team to deliver new content!
+
+In short, we’ve cleaned the Data News page. The previous content, from the 1st of December 2023 to the 7th of February 2025, is available in this separate Notion page if you ever need to get back to it:
+
+→ [Data News - From 1st Dec 2023 to 7th Feb 2025](Data%20News%207dc6ee1465974e17b0898b41a353b461/Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md)
+
+
+
+What a milestone!
+
+## Main KPIs new category: By Business Scope
+
+We’ve been busy [these past few days](Data%20News%207dc6ee1465974e17b0898b41a353b461/Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) in order to deliver a very insightful feature: allowing the possibility to split most of the metrics in Main KPIs by Old Dash, New Dash or APIs.
+
+This feature is now live and is accessible as a new Category, named Business Scope.
+
+The possible values for this category are the following:
+
+- **Old Dash:** platform clients that generate activity (bookings, revenue, etc) under Old Dashboard
+- **New Dash:** platform clients that generate activity (bookings, revenue, etc) under New Dashboard
+- **APIs:** APIs clients such as Guesty, E-Deposit, etc.
+
+
+
+Contribution to Total Revenue per Business Scope categories on December 2024.
+
+The source for categorising between APIs and Platform is based on HubSpot Deals, similarly to what we already do for Onboardings/Offboardings and other HubSpot data.
+
+However, the split between Old Dashboard and New Dashboard is a bit more complex, because the same Deal can be in New Dash and Old Dash at different moments in time. For New Dash, we mostly rely on either of the following (see the sketch after this list):
+
+- **If a Booking comes from New Dash**: from here we can retrieve many metrics, such as Bookings, Guest Journeys, Guest Payments, etc. This should be relatively accurate.
+- **The moment in time in which users move from Old Dash to New Dash:** we only rely on this if we do not have a more accurate way to handle the split. This affects metrics that are Deal-dependent, such as Deal metrics, Listing metrics, etc., and also Xero-related metrics such as Invoiced Revenue and Resolution Payments. In these cases, we attribute them to New Dash once the user has moved from Old Dash to New Dash. This might not be 100% accurate in the transition month specifically, since a user can still be invoiced for Old Dash concepts while already in New Dash. Still, it can be seen as a good approximation, and as more users move further past their transition, this split should become more and more accurate.
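+
+In pseudo-SQL terms, the attribution looks roughly like this sketch - column names are illustrative, not the actual DWH schema, and KYG Lite users are assumed to be filtered out upstream:
+
+```sql
+-- Rough sketch of the Business Scope attribution (hypothetical columns).
+SELECT
+    CASE
+        WHEN deal_type = 'API'                      THEN 'APIs'
+        -- Booking-level metrics: the booking source tells us directly
+        WHEN booking_source = 'new_dash'            THEN 'New Dash'
+        WHEN booking_source = 'old_dash'            THEN 'Old Dash'
+        -- Deal- and Xero-based metrics: fall back to the migration date
+        WHEN metric_date >= new_dash_migration_date THEN 'New Dash'
+        ELSE 'Old Dash'
+    END AS business_scope
+FROM some_kpi_model;  -- hypothetical
+```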
+
+Lastly, there are also some users that are excluded from this categorisation, such as KYG Lite users. These were originally in Old Dash and are now in New Dash, since they were the first to move during the New Dash MVP. However, these users have been excluded from New Dash reporting for a while since… well, technically, we do not care about them. Therefore, to keep consistency between New Dash reporting and Main KPIs, KYG Lite users are NOT included in any category.
+
+
+
+Example of some New Dash metrics in the MTD tab up to 16th February 2025
+
+The majority of the metrics are already available for this new category. The main exceptions are Estimated MRR and Churn metrics, for which we need to deep-dive to understand feasibility, and Billable Bookings, for which we have a different definition that needs to be implemented before it can be made available.
+
+## Main KPIs new metric: Live Deals
+
+[Last week we presented the first Year-To-Date Overview tab](Data%20News%207dc6ee1465974e17b0898b41a353b461/Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), which was stated as a first draft. Currently we have gathered the requirements on what needs to be included and how it is supposed to be presented. While handling this, we discovered we actually needed another metric to define Deals that have been onboarded and that have not churned.
+
+This metric is now available under the name of Live Deals. This corresponds to Deals that are New, Never Booked, Active or Reactivated - so, in short, any Deal state that has not churned.
+
+
+
+Example of Live Deals since July 2024 split by Business Scope. The pink line represents New Dash accounts, and we see how these have been increasing, especially since December. Note that at the moment of this screenshot there are still remaining days in February, so the number of migrated Deals will likely increase in the following days.
+
+A very interesting use case for this metric is following the migration of users from Old Dash to New Dash. Linking this metric to the previously announced category “By Business Scope”, we are able to track over time the number of accounts that are effectively in each scope, with a one-day delay.
+
+Lastly, a small data quality point to consider: currently we do not have a proper way to exclude test accounts, so it is likely that the number of Live Deals reported is slightly overestimated. The same applies to the rest of the Deal metrics.
+
+## New Dash service adoption is now live
+
+This week we’ve also [continued on New Dash reporting improvements](Data%20News%207dc6ee1465974e17b0898b41a353b461/Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), with the goal being to provide more insights on the service adoption on New Dash.
+
+We’ve released a new tab in the New Dash Reporting Power BI report named Offered Services. This tab allows us to understand, for a given service, the number of users, listings and bookings that have this service offered.
+
+
+
+Example of Offered Services tab for the service Protection Pro
+
+This is further split by status: in the case of Bookings, the status of the service itself. In the case of Listings, whether these listings have a program applied containing this service, while not being inactive. Lastly, users are considered active for a given service if they have at least one active listing with a program containing that service.
+
+While this is a first step towards understanding the adoption of New Dash services, there are likely many more possibilities we can explore to improve coverage. Still, this provides a first basic understanding of adoption per New Dash service.
+
+## Superhog reporting back alive
+
+In the scope of [re-working Superhog reporting](Data%20News%207dc6ee1465974e17b0898b41a353b461/Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’re glad to announce that the Listings, Bookings and Payments reports have now been fully migrated under DWH scope, where the Data team has control.
+
+We also re-deployed the PMS report. This one is still reading from the Backend, but conversations are ongoing to see if we can replicate it from the DWH as we did for the previous reports.
+
+# Prior Data News History
+
+Apparently Notion cannot handle massive pages well! If at some point you need to recover the previous Data News history, you can access it through this link:
+
+[Data News - From 1st Dec 2023 to 7th Feb 2025](Data%20News%207dc6ee1465974e17b0898b41a353b461/Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md)
\ No newline at end of file
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a.md b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a.md
new file mode 100644
index 0000000..2291af9
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a.md
@@ -0,0 +1,2695 @@
+# Data News - From 1st Dec 2023 to 7th Feb 2025
+
+A place to communicate progress, achievements, updates and generally any new thing from the Data Team. Come regularly to stay updated.
+
+# 2025-02-07
+
+## Booking Fees per Billable Booking decrease analysis
+
+Our latest analysis explored the decline in Booking Fees per Billable Booking, which dropped from ~6 GBP (mid-2023) to ~3 GBP (late 2024). Initially, this trend raised concerns about a structural decrease in invoiced revenue per booking. However, after a detailed investigation, we identified key factors behind the drop:
+
+- **Guardhog Booking Fees removal:** A major portion of the decline was due to the elimination of Guardhog-related fees in June 2024, which previously inflated the metric. Adjusting for this, the perceived downward trend largely disappears.
+- **Temporary invoicing issue:** A billing error in Nov–Dec 2024 resulted in missing fees, later corrected in January 2025. This was an isolated event with no lasting impact.
+
+Additionally, we explored a potential impact coming from cancellations. However, cancellation rates remained stable, so this hypothesis was discarded.
+
+
+Booking Fees per Billable Booking. The dashed line corresponds to the original observed values, in which we can observe an almost consistent decrease. The solid line corresponds to the corrected values once taking into account Guardhog Booking Fees and the invoicing incident late 2024.
+
+Once these factors are accounted for, the apparent structural revenue drop is largely minimised. The remaining variation can potentially be linked to seasonal trends in the client mix. However, we will continue monitoring trends to ensure a solid understanding of long-term movements.
+
+The in-depth analysis can be found in this Data Paper:
+
+[2025-02-04 Booking Fees per Billable Booking Decrease ](https://www.notion.so/2025-02-04-Booking-Fees-per-Billable-Booking-Decrease-1840446ff9c980588958c56a8b600d47?pvs=21)
+
+## Reworking Cancelled Bookings in Main KPIs
+
+There have been quite a few misconceptions recently about the impact that Cancellations can have on our business. When a drop in certain metrics is found, a first hypothesis that usually comes to the table is a potential rise in Cancelled Bookings that could explain the situation.
+
+The fact that in Main KPIs we computed Cancelled Bookings and were always showing a big YoY increase with an always-red label didn’t help either. The reality is that this metric of Cancelled Bookings was not accurate in how it was attributed to a date. When we first built it, we assumed that once a Booking gets Cancelled, the record of that Booking couldn’t receive further updates. However, this has proven not to be true, so the metric was not reliable over time: there was always a chance that Bookings cancelled a while ago could be re-attributed later, which is why the recent days of the MTD tab in Main KPIs always showed big increases.
+
+When we were doing the analysis of the Booking Fees per Billable Booking decrease, we explored the potential impact of Cancellations on that specific problem. Knowing that we couldn’t rely purely on the Cancelled Bookings metric, but that we could rely on the status itself, we created cancellation rates attributed to the check-out date: and surprise, although the metric is not completely stable over time, there was no real sign of an increase in cancellations.
+
+This is why we’ve decided to rework Cancelled Bookings in Main KPIs following a similar approach. In essence, we have dropped the previous Cancelled Bookings metric and created:
+
+- **Cancelled Created Bookings** → Bookings that are cancelled, attributed to when the Booking is created.
+- **Cancelled Check Out Bookings** → Bookings that are cancelled, attributed to when the Booking is completed (check-out date).
+
+With this, we’re able to also compute:
+
+- **Created Bookings (Excl. Cancelled)** → Total Created Bookings indistinctly of the state minus Cancelled Created Bookings
+- **Check Out Bookings (Excl. Cancelled)** → Total Check Out Bookings indistinctly of the state minus Cancelled Check Out Bookings
+
+And with these metrics we can compute effective cancellation rates as:
+
+- **Created Booking Cancellation Rate** → Cancelled Created Bookings divided by Total Created Bookings.
+- **Check Out Booking Cancellation Rate** → Cancelled Check Out Bookings divided by Total Check Out Bookings.
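+
+In SQL terms, and with hypothetical table and column names, the two rates are simply:
+
+```sql
+-- Illustrative computation of the cancellation rates per month.
+-- Assumes bookings(created_at, check_out_date, is_cancelled); names are hypothetical.
+SELECT
+    DATE_TRUNC('month', created_at)                          AS created_month,
+    COUNT(*) FILTER (WHERE is_cancelled) * 100.0 / COUNT(*)  AS created_booking_cancellation_rate_pct
+FROM bookings
+GROUP BY 1;
+
+SELECT
+    DATE_TRUNC('month', check_out_date)                      AS check_out_month,
+    COUNT(*) FILTER (WHERE is_cancelled) * 100.0 / COUNT(*)  AS check_out_booking_cancellation_rate_pct
+FROM bookings
+GROUP BY 1;
+```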
+
+
+
+Check Out Booking Cancellation Rate (as a %) per month, split by year, from Main KPIs - Global Evolution over Time
+
+These metrics are definitely much more accurate to follow. However, a very important note: Created Bookings can get cancelled at any point until they have Checked Out. This means that cancellation rates measured close to Creation time will always be lower than the final figure once the bookings are completed. So we recommend reading these metrics with caution.
+
+Lastly, these new metrics have the same categories available as most of the metrics, meaning we can deep-dive into By # of Listings segmentation and By Billing Country. Below a few examples:
+
+
+
+Check Out Booking Cancellation Rate (as a %) over time, by # of Listings segmentation. We see how small clients (01-05) are actually increasing their cancellation rate over time, while bigger clients have a lower cancellation rate overall.
+
+
+
+Check Out Booking Cancellation Rate (as a %) over time, by Billing Country, only considering the 2 most important countries: USA and GBR (UK).
+
+## Main KPIs first draft on YTD is now available
+
+It’s been a while since we started building Main KPIs back in June 2024. In this period of time, there has been a massive increase in the number of metrics that we’re tracking and, well, it’s starting to be a bit overwhelming to get fast insights if you do not know exactly what you need to look for. Are all metrics important? Are there a few that might be representative of the overall business?
+
+In this sense, we’ve been preparing internally a first draft of the metrics that we, on the Data side, check the most. The idea is to present these in an overview tab so they are easier to understand and more welcoming. For the first draft we decided to look at the Year-To-Date evolution of these metrics, and here’s the result:
+
+
+
+A real example of how the tab looks, for a given segment and year that we are not disclosing. Each metric shows the YTD value of the selected year in a callout. Below, PY YTD shows the same value observed in YTD for the previous year. Additionally, we show the difference between YTD and PY YTD in both absolute and relative terms. Lastly, depending on the type of metric and the increase/decrease, figures automatically show in Green if things are going well, Red if they are going badly, and Black if there’s no data to compare against.
+
+In the top line, we have the 3 main indicators: Total Revenue (income generated), Revenue Retained (deducting Host Takehome amounts, so, Damage Host-Waiver Payments) and Revenue Retained Post-Resolutions (deducting both Waiver Payments and Host Resolutions Payments).
+
+These 3 indicators are a direct sum of the second line: Total Revenue is the sum of Guest Revenue, Invoiced Operator Revenue and Invoiced APIs Revenue. Adding Damage Host-Waiver Payments gives Revenue Retained, and also adding Host Resolutions Payments gives Revenue Retained Post-Resolutions.
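+
+Expressed as a minimal sketch (hypothetical column names; the two payment columns are assumed here to be stored as negative amounts, so adding them deducts from revenue):
+
+```sql
+-- Illustrative composition of the three top-line indicators.
+SELECT
+    guest_revenue + invoiced_operator_revenue + invoiced_apis_revenue AS total_revenue,
+    guest_revenue + invoiced_operator_revenue + invoiced_apis_revenue
+        + damage_host_waiver_payments                                 AS revenue_retained,
+    guest_revenue + invoiced_operator_revenue + invoiced_apis_revenue
+        + damage_host_waiver_payments + host_resolutions_payments     AS revenue_retained_post_resolutions
+FROM monthly_revenue_lines;  -- hypothetical, one row per month
+```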
+
+The reality, though, is that many of these metrics are Invoicing-dependent, so the information is not timely. The only exception so far is Guest Revenue, which depends on the Guest Payments from the Backend.
+
+This is why we’ve also added a third line that aims to indicate whether we should expect more or less growth and sustainability, based on purely timely metrics. These are the following:
+
+- **Avg. Deals Booked per Month**: in short, average number of clients that are active per month.
+- **Total Churning Deals**: amount of clients that have offboarded in the whole year to date.
+- **Avg. Listings Booked per Month**: average number of listings that are actively generating bookings per month.
+- **Check Out Bookings (Excl. Cancelled)**: Total amount of Bookings that have Checked Out in the whole year to date, that have not been cancelled.
+- **Guest Journeys Completed**: Total amount of verification requests processes that have been completed.
+
+Now, while this is already available, we consider this as a first draft. The idea would be to gather your feedback on what should be really important to track and discuss the best way to represent it. So we’re expecting changes in the coming days - don’t take this first snapshot for granted.
+
+Lastly, for those who want to follow this on a daily basis, we now have the possibility of getting this in a daily (or weekly, or monthly) email by subscribing to the page and setting the Year filter to 2025.
+
+
+
+Example of the e-mail Uri received on Feb 9th with the latest update on this tab.
+
+Feel free to contact us if you need help setting it up. Looking forward to your feedback!
+
+## Work in progress on metric split per Dash source
+
+This week we’ve started working on allowing the majority of the metrics available in Main KPIs to be split by type of Dashboard. This means, for instance, being able to track the Bookings coming from Old Dash and those from New Dash separately.
+
+The idea is to have this as a new Category, alongside the By # of Listings segment and By Billing Country categories.
+
+This piece of work is currently in progress and is not yet available in Main KPIs. Once ready, it will allow us to properly track the New Dash migration and understand any impact caused by it.
+
+## Updating Superhog Production reports
+
+Over the past few weeks, some of our reports faced connection issues, preventing them from updating. Surprisingly, no one raised concerns, leading us to believe they were no longer in use. This prompted a discussion about discontinuing all reports in Superhog Production.
+
+To test this assumption, we began deleting them one by one—only to discover that several teams still rely on these reports! In response, we shifted our focus to fixing the connection problems and ensuring all reports were up to date.
+
+### Progress Update:
+
+✅ **Listings Report** – Completed, providing detailed data on all listings.
+
+✅ **Bookings Report** – Completed, delivering comprehensive booking insights.
+
+🔄 **Payment Report** – Work in progress. After discussing tax and waiver fee calculations with Finance, we are finalizing updates to this last report.
+
+We're nearly there—thanks for your patience! 🚀
+
+## Work in progress on new features on New Dash services
+
+Last week, we began working on improving the visibility of services offered by New Dash users across their listings and bookings. This initiative was driven by requests from several stakeholders seeking greater clarity on service adoption.
+
+### **User Adoption per Service**
+
+Understanding which users have specific services configured is key. Our goal is to adapt the **User Adoption Funnel**, which currently focuses on “has any upgraded service,” to instead track “has a certain service.”
+
+### **Adoption Breakdown:**
+
+📌 **Total Users** – All users in the system
+
+📌 **Users with a Service in a Program (Bundle)** – Users who have at least one service in a bundle
+
+📌 **Users with a Service in a Listing** – Users whose bundle services are applied to their listings
+
+📌 **Users with a Service in a Booking** – Users whose services are applied at the booking level
+
+With these insights, we aim to provide better tracking and analysis of New Dash services, helping teams make data-driven decisions. Stay tuned for updates!
+
+# 2025-01-31
+
+## January Invoicing Incident mitigated
+
+[Last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) we explained that one of the first clear insights from the Booking Fees decrease analysis was the discovery that we were missing invoices for some clients over the period of November and December 2024.
+
+Early this week we managed to mitigate the incident by implementing a fix and posting some late invoices to the affected clients, which results in a recovery of 99.1% of the revenue potentially lost to the incident.
+
+At the moment we’re waiting for the generation of January exports to double check - again - that this fix looks consistent and after a post-mortem we will resolve the incident.
+
+For in-depth details of the incident, please refer to the dedicated [incident report](https://www.notion.so/20250124-01-Booking-invoicing-incident-1880446ff9c9803fb830f8de24d97ebb?pvs=21).
+
+Lastly, while it’s clear that this incident had an impact on the decrease in Booking Fees, it does not explain the whole story. This is why the analysis will resume once the incident is resolved.
+
+## How to achieve +4.5% increase in Guest Revenue?
+
+A while back, in Q4 2024, we explained in several Data News entries that, in collaboration with the Guest Squad, we launched and monitored a new A/B test on the Guest Journey to enhance Truvi’s ability to make more informed decisions based on actual results.
+
+In this first A/B test we went for a simple approach, just to ensure that the overall process - from actual implementation to results monitoring - was working as expected. We were not really expecting good or bad results, since the whole point of the test was understanding the impact of the position of the Continue button on the payment page within the Guest Journey.
+
+And… well, after several weeks we’re happy to announce that this A/B test has been successful and - surprisingly - the new version achieved +4.5% increase in Guest Revenue and +3.1% increase in Payment Rate!
+
+The detailed results are available in this [Notion page](https://www.notion.so/2025-01-20-Guest-Journey-Floating-Button-A-B-Test-Results-17e0446ff9c9809ca94ecafd79fb6db1?pvs=21).
+
+
+
+The new version was rolled-out to all Guest Journeys on Thursday 23rd of January, and this week we had a retrospective on how we can improve even further the collaboration between Guest Squad and Data in the A/B testing process: because we want to launch many more of them!
+
+Congrats to the Guest Squad for these amazing results!
+
+## Data Request workflow update
+
+This week we noticed that our Data Engineer Pablo was tagged as Data Captain… while he’s still off!
+
+We’ve updated the Data Captain workflow so that, for this period, the role only rotates between Joaquín and Uri.
+
+Additionally, we’ve been doing some small improvements on our Data Request workflow to improve usability and gather the needs more efficiently on our side… as well as better flagging the urgency of the requests.
+
+
+
+Kind reminder that we encourage the use of the Data Request workflow for any request you might have, since this helps us prioritise better and work more efficiently with controlled interruptions.
+
+## Further Xero reporting automation improvements
+
+The [invoicing incident](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) has brought to the surface that we might not be in a strong position to detect invoicing issues in a timely manner. Simultaneously, a few weeks ago we were carrying out an exercise with Finance to improve how we track revenue streams from Xero for KPI purposes.
+
+With both topics quite fresh, we’re currently exploring simple ways to provide more interesting insights around Xero that could help identify potential issues on the invoicing side as well as improve efficiency on the Finance side.
+
+In the reporting of Invoicing & Crediting, we now have a couple of new tabs.
+
+The first one is Revenue Monthly Trends. This allows us to aggregate Invoices and Credit Notes into different revenue groupings at different levels. For instance, Guest Screening and Protection can be seen as an overall category with multiple sub-categories within it, such as Booking Fees, Listing Fees, etc. There are currently 3 levels of aggregation in place.
+
+
+
+Snapshot of Revenue Monthly Trends for December 2024. We can observe that there’s multiple levels of aggregation around Revenue - currently 2 displayed - but these can be further expanded to a 3rd level. For instance, Screening Services and Protection Services that refer to New Dash services can be expanded to have the revenue detail per service, i.e., Screening Plus, Protection Pro, etc.
+
+With this information, we’re able to provide a monthly overview of the main revenue lines and compare it versus the previous month (MoM) and the same month of the previous year (YoY), and even retrieve the cumulative Year-to-Date figures and compare them vs. the Prior YTD. Note though that this YTD is based on the financial year, meaning it starts in April and finishes in March.
+
+Lastly, we’ve also added the possibility to filter by Deal in this tab. However, a more interesting tab for account-based use cases might be Invoiced Revenue per Deal, which compares the invoiced revenue for a Deal on a MoM basis, as well as showing how much a given account contributes to the total invoiced revenue in a given month.
+
+
+
+November 2024 snapshot on Invoiced Revenue per Deal. You see the 3rd and 5th rows, that are blank for Current Month Share (%) and have a -100% in MoM(%)? These are the 2 accounts that raised the alarms for the Invoicing Incident. The main difference is that while before we needed to deep-dive into analytics to reach to this conclusion, with this new report it will be far easier to detect similar issues in the future.
+
+This is especially useful for detecting invoicing issues, as well as for further understanding which accounts contribute the most to Invoiced Revenue and through which revenue lines, since the same aggregation levels exist in this tab.
+
+## Update on Main KPIs report
+
+This week, we have implemented several important updates to our Main KPIs Report to improve the accuracy and reliability of key business metrics. These changes focus on refining our Expected MRR calculation, updating display rules for invoicing-related metrics, and adjusting the deals lifecycle states to align better with our reporting logic.
+
+1. **Updated Expected MRR Calculation**
+    The Expected MRR metrics are now calculated using the total revenue from the previous 12 months, up to the month before the one being displayed.
+    This new approach provides a more up-to-date estimate for Expected MRR, particularly for the month prior to the current one. Previously, the estimation method was not as responsive to recent revenue trends. By incorporating the last 12 months of revenue, the metric now reflects a more accurate and timely projection (see the sketch below this list).
+2. **Adjusted Display Rules for Invoicing-Related Metrics**
+    We have modified the display rules for all invoicing-related metrics based on feedback from the Finance Team. Previously, these metrics were not displayed for the current or previous month, leading to some delays in visibility. The Finance Team clarified that the invoicing cycle for the previous month is typically finalized around the 15th of the current month. To ensure data accuracy and avoid premature reporting, we now display these metrics only after the 20th, allowing ample time for the invoicing process to be completed.
+    - Before the 20th of the current month → Revenue metrics for both the current and previous months are hidden.
+    - After the 20th of the current month → Revenue metrics for the previous month become visible.
+3. **Removal of 'First Time Booked' from Deals Lifecycle States**
+    The 'First Time Booked' state has been removed from the deals lifecycle states.
+    After internal discussions, we found that this lifecycle state was causing issues with MRR metric calculations, especially for new deals that received bookings within the same month as their creation. To maintain the consistency and reliability of our MRR reporting, we decided to remove this state from the report.
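+
+Regarding point 1, here is a minimal sketch of a trailing-12-month estimate - assuming Expected MRR is the previous 12 months of revenue divided by 12, with illustrative table and column names:
+
+```sql
+-- Illustrative sketch of the trailing-12-month Expected MRR estimate (hypothetical names).
+SELECT
+    reporting_month,
+    SUM(total_revenue) OVER (
+        ORDER BY reporting_month
+        ROWS BETWEEN 12 PRECEDING AND 1 PRECEDING   -- previous 12 months, excluding the displayed month
+    ) / 12.0 AS expected_mrr
+FROM monthly_revenue;  -- hypothetical, one row per month
+```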
+
+These changes were made to improve the usability of the Main KPIs Report, ensuring it better reflects real-time business performance while aligning with our invoicing processes.
+If you have any questions or feedback, please feel free to reach out!
+
+# 2025-01-24
+
+## From Booking Fees per Billable Booking decrease analysis…
+
+This week we’ve invested a bit of brainpower to try to understand the decreasing trend of Booking Fees per Billable Booking over the past few months. This trend has been observable for several months already and reached its lowest figures around the end of 2024.
+
+Several hypotheses have been formulated and are being investigated, namely:
+
+- Is a potential increase in Cancellations reducing Booking Fees / Booking Fees per Billable Booking?
+- Are changes of Price Plans the main reason behind this decrease?
+- Is the churning of clients with high booking fees the potential reason behind this decrease?
+
+At this stage, data shows that the overall trend of cancellations is quite stable over time and does not correlate with a decrease in Booking Fees. However, this is at an overall level, so we also investigated on a per-client basis. There, we checked the biggest clients in terms of contribution to Booking Fees and correlated them with Cancellations, but again, the decrease doesn’t seem linked to them.
+
+In short, it’s very reasonable to assume that Cancellations are not a cause of the decrease in Booking Fees per Billable Booking, nor of a decrease in Booking Fees or Billable Bookings.
+
+However, when deep-diving at the per-client level, we noticed that 2 of the main contributors to Booking Fees had not been invoiced since October 2024. After confirmation from the Finance side, we raised an invoicing incident.
+
+While at this stage it’s clear that the invoicing incident can partially explain the decrease in Booking Fees during the last 2 months, there might be other reasons that need to be investigated to explain the overall decaying trend, especially prior to November 2024. However, priority has now shifted to the invoicing incident - the analysis will resume later on.
+
+## … to a new invoicing incident
+
+As discussed in the previous entry, the effort to understand the decrease in Booking Fees per Billable Booking highlighted that something has been off in the invoices for the last couple of months. Since only a few clients were affected, the effects were very localised and difficult to spot.
+
+In short, the issue seems to be an undesired side effect of the invoicing fix agreed after our previous incident, as explained in the Data News entry back in November: [Fixes for invoicing have been agreed](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md)
+
+At this stage, the incident is still ongoing. A potential fix is being discussed, as well as follow-up steps on how we can recover the missing revenue from the affected clients. We’re also working on documenting the incident. This is currently the main priority on the Data side, so other lines of work might be slowed down until it is resolved.
+
+Lastly, this emphasizes the need for better per-month and per-client invoicing visualisation and alerting, to ensure that any deviation from the usual can be tracked properly. There are already a few ideas on the table that could help in this regard, mostly around further automated Xero reporting.
+
+## New Metrics in Main KPIs
+
+To provide deeper insights into our performance and better understand the dynamics of host resolutions and revenue retention, we’ve introduced a set of new metrics to the Main KPIs report. These metrics are designed to track key aspects of our operations and help us monitor their evolution over time; each one boils down to a simple ratio (see the sketch after the list).
+
+- **Revenue Retained Rate:** Revenue Retained divided by Total Revenue
+- **Revenue Retained Post-Resolutions Rate:** Revenue Retained Post-Resolutions payments divided by Total Revenue
+- **Host Resolutions Payment Rate:** Resolutions Payment Count divided by Created Bookings
+- **Host Resolutions Amount Paid per Booking Created:** Resolutions Amount Paid divided by Created Bookings
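+
+For clarity, each of the above boils down to a simple division; a minimal sketch (the function names are illustrative, not the actual DWH measures):
+
+```python
+def revenue_retained_rate(revenue_retained, total_revenue):
+    return revenue_retained / total_revenue
+
+def revenue_retained_post_resolutions_rate(retained_post_resolutions, total_revenue):
+    return retained_post_resolutions / total_revenue
+
+def host_resolutions_payment_rate(resolutions_payment_count, created_bookings):
+    return resolutions_payment_count / created_bookings
+
+def host_resolutions_amount_paid_per_booking(resolutions_amount_paid, created_bookings):
+    return resolutions_amount_paid / created_bookings
+```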
+
+By tracking these metrics over time, we expect to better understand changes in guest behaviour, operational processes, or market conditions that impact resolutions and revenue retention.
+
+## PMS Data in New Dash & Acc. Manager’s reports
+
+As part of our ongoing efforts to improve the quality and usability of our reports, we’ve introduced new information regarding Property Management Systems (PMS) to **both the New Dash and the Account Managers reports**. This addition provides users with enhanced filtering capabilities and greater visibility into PMS associations. Now Account Managers can easily segment their accounts based on PMS usage, enabling more targeted strategies and conversations with clients.
+This new feature reflects our commitment to continually improving our data tools and delivering more actionable insights. We’re confident that this update will enhance your ability to make informed decisions and maximize your impact.
+
+# 2025-01-17
+
+## Pablo goes on paternity leave
+
+Pablo here.
+
+My little girl will be arriving some time next Monday/Tuesday, and I will be off for some weeks on paternity leave to take care of her and her mother.
+
+Don’t despair! You’re in great hands with Uri and Joaquín. We’ve worked hard to make sure my absence is barely noticed, and the guys will do a great job as always (actually, sometimes they’ve done such great things while I’m out on holidays that it makes me wonder if I’m just bothering and dragging them down when I’m around).
+
+For any data topics, please keep relying on Uri and Joaquín, or just summon the @data-captain tag on slack and they will come to the rescue.
+
+You can expect to see me connected again around mid-March.
+
+Wishing you the best of luck while I’m out, and see you soon!
+
+## New Onboarding MRR metric released in Main KPIs
+
+We have added a new metric to our Main KPIs Report: **Expected Onboarding MRR.** The Onboarding MRR is a key metric that estimates the expected monthly revenue from each new deal. It is calculated by taking the total revenue generated by all active accounts over the last 12 months and dividing it by the number of active months for each account. This approach allows for a more accurate and dynamic understanding of revenue expectations during the onboarding phase, based on the number of listings an account has.
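+
+As an illustration of the calculation described above, here is a minimal sketch - assuming we have, for each active account, its monthly revenue over the last 12 months, and averaging the per-account figures (in practice the logic also takes the number of listings into account; the input shape and averaging choice here are assumptions):
+
+```python
+def expected_onboarding_mrr(revenue_last_12_months_by_account: dict[str, list[float]]) -> float:
+    """Sketch: per account, total revenue over the last 12 months divided by the
+    number of active months, then averaged across accounts."""
+    per_account_mrr = []
+    for account, monthly_revenue in revenue_last_12_months_by_account.items():
+        active_months = sum(1 for amount in monthly_revenue if amount > 0)
+        if active_months:
+            per_account_mrr.append(sum(monthly_revenue) / active_months)
+    return sum(per_account_mrr) / len(per_account_mrr) if per_account_mrr else 0.0
+```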
+
+The addition of the Expected Onboarding MRR metric to our Main KPIs report enables a bit of forecasting and strategic planning. By understanding the expected revenue from new accounts, we can set more realistic financial goals.
+
+All the information regarding this new metric is available in our report [Main KPIs - Power BI](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi)
+
+## Revenue metrics improvements in progress
+
+After [last week’s inclusion of Revenue Retained metrics](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we’ve been discussing further with Finance in order to understand and minimise the gaps on KPIs Revenue reporting.
+
+Long story short, we’re refactoring the way we retrieve invoiced revenue lines, the ones that come from Xero. Historically we’ve been using the Item Codes that are manually tagged in the invoice and credit note line items for Booking, Listing and Verification fees, as well as the Waiver amount that we pay back to the Hosts; while we were using Accounting codes for other metrics, such as the revenue that comes from APIs. After several discussions with Finance, we’ve reached the conclusion that, in order to minimise the gap, it’s better to use the Accounting codes - and indeed, an ad-hoc analysis shows that this change should reduce the gap quite a bit.
+
+This will allow us to roll out several improvements during this week, namely:
+
+- Reduce the gap between KPIs and the P&L on Listing, Booking, Verification Fees Revenue
+- Reduce the gap between KPIs and the P&L on the Damage Host-Waiver Payments
+- Start tracking New Dash Invoiced Services (Waiver Pro, Protection Pro, Protection Plus, Id Verification, Screening Plus, Sex Offenders Check)
+- Split Guesty Revenue contribution between 1) the Athena API Booking Fees and 2) the Guesty Resolutions revenue line. However, we will still have discrepancies on Guesty due to the accrued revenue that we’re not able to track at the moment.
+- Investigate any other revenue line after the first batch of changes is complete
+
+Important note:
+
+> **This change will inevitably modify the revenue aggregations, namely Invoiced Operator Revenue, Total Revenue, Revenue Retained, Revenue Retained Post-Resolutions, etc. However, these figures should become more accurate.**
+>
+
+This change comes with a drawback, however. Since P&L data is only available from April 2022 onwards, we will need to cut the start date of all KPIs to that point in time in order to have consistent and more accurate data. This should not be troublesome - in reality, the older historical data is quite inaccurate by nature.
+
+During this week you can expect changes in the figures shown in the KPIs and Account Managers reports. If you have any questions or notice anything out of the ordinary, please feel free to reach out to us!
+
+# 2025-01-10
+
+## Support on New Pricing automation
+
+This week we’ve been working closely on the New Pricing initiative to help understand which clients can be migrated from old pricing models to a new structure. Our role has focused on providing detailed data analysis to support this transition.
+
+By analysing client behaviour and the different price structures, we’ve helped by providing the expected differences in price, as well as segmenting the different clients depending on their impact on the business and whether or not they can move to the new structure. This is now in the hands of the RevOps team to take data-informed actions.
+
+## Account Margin report is now live
+
+[Last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) we explained that we were working to automate the net revenue after the different payouts. Earlier this week we released a new report in the Account Managers Reporting Power BI app that contains very detailed data per account and time window on the different revenue inputs and outputs, ending up with a kind of client gross margin (without taking into account any operational or fixed cost allocation). This is interesting in the sense that it shifts the notion of which accounts are most important from Total Revenue to something more impactful to Truvi’s financial health: we can have very big accounts in terms of Total Revenue, but these can contribute less than smaller accounts depending on their Host-take Waiver payouts and/or Resolutions Payouts. See the example below:
+
+
+
+This is a real, handpicked, illustrative (and anonymised) example.
+
+We have 3 accounts ordered by their contribution to Total Revenue. At first glance, we could say that the first account is contributing more in Total Revenue - which is true. However, the picture changes if we deduct the Host Takehome (the Waiver amount paid back to the host). In this case, we see how the huge majority of the first account’s revenue is actually paid back to the Host, and we only retain 29.7% (Revenue Retained Ratio). Now we would say that it’s the 2nd account that is actually contributing more. But what if we also deduct the Host Resolutions Payouts? In this case, it’s actually the 3rd account that is contributing the most “net revenue”, as we can observe in Revenue Retained Post-Resolutions. This illustrative example shows how we can reach different conclusions depending on the angle we take and the business problem we aim to solve.
+
+During this week we’ve also included a few improvements, such as the ratio of Resolution Payments per Booking, to help the Resolutions team, as well as the Active PMS for each account.
+
+## Revenue Retained metrics now available in Main KPIs
+
+Very much [in line with the previous Data News entry](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’ve taken the opportunity to propagate the 2 new metrics, Revenue Retained and Revenue Retained Post-Resolutions, into Main KPIs. This allows for a better understanding of global revenue retention, as well as the capacity to analyse trends over time and across the different segments we already have in place. Additionally, the metrics have been made available in the detail-by-Deal tabs so the month-by-month information is in a handy place.
+
+
+
+Example from the MTD tab of how the 2 new metrics compare to the existing Total Revenue. Notice how the perception of growth changes: in this example, even though we’re paying Waivers back to Hosts and we have some Resolution Payments, Revenue Retained Post-Resolutions is increasing more in relative terms than Revenue Retained, and even more so with respect to Total Revenue.
+
+In order to improve the look and feel, we’ve also re-arranged the list of metrics so that Total Revenue and Revenue Retained Post-Resolutions are the first ones to appear. And we’ve added conditional formatting to easily see whether an account is trending up or down in any of the metrics.
+
+
+
+It’s not very visible in Notion though so we encourage you to go check Main KPIs!
+
+Also - have you noticed the brand new Truvi style Pink for Deal Lifecycle State?
+
+## Deals Consolidation Report is now live
+
+We have the new Deal Id consolidation report up and running. With the new consolidation report, you can now easily view all deal IDs alongside their associated names from Core, Xero, and Hubspot, whenever they exist in those systems. This consolidated view provides a clear and comprehensive overview, helping to ensure transparency and accuracy in our data.
+
+This report is designed to support all teams, so if you need access please reach out to the data team so we can help you with any needed permissions.
+
+https://app.powerbi.com/groups/me/apps/10c41ce2-3ca8-4499-a42c-8321a3dce94b/reports/eb744b2d-3f96-41dd-97df-2608daf638f3/3aa7d31cf3fa60fccec5?experience=power-bi
+
+We’ve added the new Deals Consolidation Report together with the previously existing Currency Exchange report. These are now located within a Power BI app named Miscellaneous Reports.
+
+
+
+## API Invoice advances
+
+Over the past few weeks, we have been developing new models to streamline the invoicing process for our latest API products, such as Screen & Protect and Check-in Hero. These models are designed to improve efficiency and accuracy in our billing system.
+
+Once finalized, this work will allow seamless integration with the new Hyperline billing platform in the future. This integration aims to automate the entire billing process, making it more reliable and easier for the finance team to manage.
+
+# 2025-01-03
+
+## APIs Deals now visible in Main KPIs and Account Managers reports
+
+This week we’ve been working on increasing KPIs quality. One of the main pain points we have whenever we want to report data is the fact that different sources might contain different levels of completeness. This was the case for Deal Id, for instance, in which a Deal can appear in the backend and not in Hubspot and vice versa. Same story with Xero.
+
+When we started creating the Main KPIs reports around June 2024, we only had data ingested from Xero and the backend, and basically the source of truth for KPIs was the set of Deals that appeared in the backend. However, no API deals were present in the source table we were using: only platform users with a Deal were reported.
+
+With the ingestion of Hubspot data, we’ve refined the list of Deals that are considered for KPIs reporting. In essence, we consider either:
+
+- Deals from the backend, as we did before
+- Deals from Hubspot that have gone live at any point in time (that are not in Guardhog pipeline)
+
+With this change we’re now able to track Main KPIs by Deal for APIs, such as Guesty. This also is propagated towards Account Managers reporting.
+
+Lastly, another important change: the name associated with each Deal has also changed, and we’re now using the name coming from Hubspot as the source of truth. Only if the Deal does not appear in Hubspot does the previous crazy-computation-logic to retrieve a name from the backend remain. This also affects Main KPIs and AMs reporting.
+
+
+
+Example of Guesty eDeposit account now appearing in Account Managers reporting
+
+> *Keep in mind that Global figures in Main KPIs remain unaffected by this change.*
+>
+
+This small line of work should improve the quality of the KPIs. We’re still aware that there are many fake Deals that need to be removed at some point, since we lack a proper source of truth. One step at a time!
+
+## Automating client “net” revenue post-payouts
+
+> Disclaimer: we might switch to a better, proper naming.
+>
+
+Long story short, we’ve been able to report Total Revenue at account level for several months now. We’re also able to track the Waiver amount paid back to hosts. And we know from Xero the amount of Host Resolutions that we pay out per client so… let’s combine it all together to have some monetary metrics of the “”“real””” monetary value each client represents for Truvi.
+
+That’s a bit the idea behind this initiative! And also the main reason why we prioritised including API deals in the KPIs flow as [mentioned before](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+
+
+Based on a true story
+
+We’re currently finalising a new report to enrich the monitoring of our accounts for the RevOps teams to use, hopefully bringing further data-driven decision making to their day-to-day responsibilities. Pending naming alignment, these metrics will also be made available in Main KPIs soon - we just need to wait a bit.
+
+## Deal consolidation report
+
+At Superhog, our deal information comes from three different sources: Core, Xero, and Hubspot. Each of these systems provides valuable data, but it can be challenging to verify where each deal ID originates, especially when information overlaps across sources.
+
+To streamline this process, we are currently working on a consolidated report that will display all deal IDs along with their associated names from each of these sources—Core, Xero, and Hubspot—whenever they exist in the respective system. This report will provide a clear and quick overview of where each deal's information comes from, ensuring transparency and accuracy.
+
+The primary aim of this report is to support teams, particularly the finance team, by allowing them to quickly access and verify deal data from the relevant source. This will not only save time but also improve the accuracy of financial and operational decision-making.
+
+# 2024-12-27
+
+## New Dash reporting improvements
+
+This week we’ve dedicated a bit of time to improving the reporting on the adoption of Services in the New Dash. Specifically, we’ve finalised the modelling to properly track services that require Guest Payments (Basic Waiver, Waiver Plus and Basic Damage Deposit) and fixed the computation of the Waiver Pro price by taking into account the booking’s number of nights. With these 2 improvements, we’re now in a much more accurate spot to track the estimated revenue coming from New Dash in the form of Chargeable Services.
+
+Additionally, we’ve created a new tab in the Power BI report to track Booking Details. This includes applied programs, types of services (Screening, Deposit Management, Protection), booking status, check-in, check-out, etc. We’ve also taken the opportunity to revamp the already existing tabs on User Detail to provide more meaningful insights, as well as reworking naming and UX across the reporting.
+
+
+
+The first row, which represents 68% of all New Dash bookings and 86% of the total New Dash bookings with upgraded services (i.e., those that are not Basic Screening), corresponds to Home to Host, which is clearly dominating the adoption of New Dash.
+
+And it’s nice to see that Chargeable Amounts are increasing quite a bit lately, reaching up to 1.7k GBP in the past week!
+
+# 2024-12-20
+
+## Data Team wishes you a wonderful holiday season!
+
+As we approach the end of the year, we want to share our team's availability over the holiday period. Below is a quick guide to which members of the Data Team will be available in the upcoming days:
+
+- Friday 20th Dec: Joaquin & Pablo
+- Monday 23rd Dec: Joaquin & Pablo
+- Tuesday 24th Dec: Joaquin & Pablo
+- Wednesday 25th Dec: National Holidays
+- Thursday 26th Dec: National Holidays
+- Friday 27th Dec: Uri
+- Monday 30th Dec: All team
+- Tuesday 31st Dec: Joaquin & Uri
+- Wednesday 1st Jan: National Holidays
+- Thursday 2nd Jan: Joaquin & Uri
+- Friday 3rd Jan: Joaquin & Uri
+- Monday 6th Jan: National Holidays
+
+
+
+Thank you for a fantastic year, and we’re excited to reconnect in the new year. Wishing you all joy, rest, and a bright start to 2025!
+
+## Power BI User Class
+
+On Wednesday, we hosted an informative Power BI user class aimed at empowering our team to make the most out of this powerful data analytics tool. The session was packed with valuable insights and hands-on learning opportunities, guiding attendees through three key areas: navigation, basic usage, and advanced tools.
+
+To ensure ongoing learning and support, the following resources are available:
+
+- **[Power BI Documentation](https://www.notion.so/Power-BI-users-Tips-Tricks-1510446ff9c98056ad77ead40eef2c45?pvs=21):** A complete guide covering everything discussed in the session and more.
+- **[Slack Data Channel](https://superhogteam.slack.com/archives/C06GFGHJD7H):** Our data team is always available to answer questions or provide assistance.
+
+**Missed the Class?**
+
+Don’t worry! A recording of the session is available [here](https://guardhog-my.sharepoint.com/:v:/g/personal/joaquin_ossa_superhog_com/EXaErNTuspFAolaTgWSdLVAB-_OWKUsyzoITweyBFHCyuA?e=uSc4l7&nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJTdHJlYW1XZWJBcHAiLCJyZWZlcnJhbFZpZXciOiJTaGFyZURpYWxvZy1MaW5rIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXcifX0%3D). We encourage everyone to watch it and explore the resources to enhance their Power BI experience.
+We hope this session inspired confidence and curiosity to explore Power BI’s capabilities.
+
+## How to make more revenue? Analysis on Payment Validation Rate decrease
+
+[Last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) we explained that we were carrying out an in-depth analysis on the decrease of the Payment Validation Rate. After a few final validations, we’ve wrapped up and shared our conclusions with the stakeholders.
+
+In short, we’ve gathered a few data-driven actionable items for RevOps teams that could help bring in more revenue by increasing the Payment Validation Rate of certain accounts. The fully detailed analysis can be found here:
+
+- [2024-12-16 Payment Validation Rate Decrease](https://www.notion.so/2024-12-16-Payment-Validation-Rate-Decrease-462917ff9de0403392d6a35f9a3e3d85?pvs=21)
+
+## Setting the ground for a new KPI: Onboarding MRR
+
+There have been several discussions in recent weeks on the need to be able to measure, somehow, the average monthly revenue a new account could potentially bring to our business at the moment of onboarding.
+
+While this kind of predictive approach with little data is usually a challenge, we’ve dedicated a bit of time to exploring a set of alternatives in order to quantify the degree of discrepancy we would have when creating this new KPI. The detailed analysis can be found here:
+
+- [Onboarding MRR Definition](https://www.notion.so/Onboarding-MRR-Definition-f1bada4ea5b942568d5c6b2c7917fc5c?pvs=21)
+
+At this stage we’ve shared our conclusions and recommendations with the stakeholders and once we have feedback we’ll continue working towards the new Onboarding MRR KPI implementation.
+
+# 2024-12-13
+
+## (Another) incident survived
+
+This week we experienced an incident in our infrastructure that brought a lot of our daily data integration and processing jobs to a halt. Fortunately, we caught the incident at minute 0 and were able to remedy it rather fast, so you probably didn’t even have time to notice it in our PBI reports.
+
+You can read more details on the incident here: [20241211-01 - DWH scheduled execution has not been launched](https://www.notion.so/20241211-01-DWH-scheduled-execution-has-not-been-launched-1590446ff9c9806086e0ec77336d4c51?pvs=21)
+
+## Screen and Protect & CheckIn Hero API reports
+
+After integrating Screen and Protect data into our DWH last week, we have now added CheckIn Hero API data from a new Cosmos DB container. With the data fully modelled in the DWH, we have launched two new reports, one for each system. These reports are designed to give the API team a clear view of system activity and performance while offering the flexibility to dive deeper into specific details about guests, users or key metrics. Both these reports are available in the Power BI API repository https://app.powerbi.com/groups/me/apps/043c0aec-20b8-4318-9751-f7164b3634ad/reports/a19a4491-8576-4109-b3ca-9e26d67d7b03/ReportSectionbd92a560d1aa856ba993?experience=power-bi.
+
+
+
+## Upcoming Power BI User Class
+
+Next week on Wednesday, we’ll be hosting a Power BI user class designed to help master the fundamentals of working with Power BI dashboards. Whether you’re a seasoned user or just getting started, this session is packed with practical insights and tips to enhance the user experience with this powerful tool.
+
+Here’s what we’ll cover during the session:
+
+- **How to navigate and interact with Power BI dashboards effectively:** Learn to explore and understand key insights quickly and efficiently.
+- **Tips for making the most of reports:** Discover hidden features and best practices to maximize the value of your dashboards.
+- **Basics of filtering, slicing, and exporting data:** Gain hands-on experience with essential tools for customizing and sharing data views.
+
+We’re looking forward to seeing our fellow coworkers there to learn, collaborate, and grow their Power BI skills. Don’t miss this opportunity to unlock the full potential of Power BI!
+
+## A/B test has been launched
+
+After a [successful validation on the A/A Guest Journey test](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week the Guest Squad released the real A/B test. The purpose of this A/B test is to understand if showing a Product Selection button always visible (independently of the scrolling) increases the Guest Revenue, rather than keeping the button locked on the bottom of the screen - which is the current behaviour.
+
+So far, with just 5 days of data, the results are still not statistically significant, thus we cannot conclude that one version is better or worse than the other. If you’re interested in following the results, we invite you to join the open slack channel [#ab-test-guest-journey](https://superhogteam.slack.com/archives/C083V5Q7K7W).
+
+Great job Guest Squad!
+
+## Analysis Payment Validation Rate decrease
+
+This week we’ve also dedicated some time to an in-depth analysis of the Payment Validation Rate decrease. In essence, we’re observing from different data sources (Mixpanel, DWH) that the rate of Guest Journeys that offer Payment Validation has been decreasing in these past months, which could be linked to a potential revenue loss in the Guest Journey. In other words, we could have gained more revenue from our guests. The main hypotheses that we’re currently investigating and quantifying are whether 1) churned clients offered more Guest Journeys with Payment Validation, 2) new clients are offering fewer Guest Journeys with Payment Validation, and/or 3) existing clients (not new, not churned) have changed their behaviour and are now offering less Payment Validation than before.
+
+At this stage the analysis is mostly complete and we aim to wrap up and share it in the following days.
+
+# 2024-12-06
+
+## Screen and Protect data integrated in DWH
+
+This week we’ve integrated a new Cosmos DB container for Screen and Protect API service. With the help of the APIs Squad we managed to integrate the data through our tool Anaxi into our DWH. With this new data available, we started modelling new tables within DWH and are now able to advance on the dedicated reporting for Screen and Protect.
+
+We’ve also started discussions to handle a similar integration for the Check-In Hero API service, since we have the first client onboarding on it.
+
+## Screen and Protect API reporting advances
+
+With the successful integration of Screen and Protect data into our DWH, we are now developing a new report. This report will provide easy access to detailed information for each verification request generated by the new API, along with an overview of performance over time. Its goal is to enhance decision-making and empower the API team with a comprehensive view of the new metrics aligned with their objectives.
+
+## Athena API migration
+
+This week we’ve also handled the migration of Athena API, [similarly as we did a few weeks ago for the e-deposit migration](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md). In essence, the APIs team has a new Cosmos DB instance in which they wanted to migrate the Athena (Guesty) records. Since we already had some reporting dependencies for Athena, we needed to coordinate between the 2 teams to avoid any downtime.
+
+Everything went quite smoothly and since Wednesday afternoon reports have been reading from the new stream without any interruption. Good job, APIs Squad!
+
+## A/A test has been launched… and is looking good!
+
+Great news: after a few technical alignments, discussions, and some extra work due to Data needs, Guest Squad finally launched the A/A test this past Tuesday, 3rd of December!
+
+The purpose of this A/A test is simple: ensure that everything works as expected before the real A/B test. “Everything” is a broad word, but in essence, it means ensuring that we can randomly split the Guest Journey traffic into any desired variation that will be affected by a given production-ready configuration.
+
+Effectively, in this A/A test, the Guest Journey traffic is redirected into 2 different setups that… well… are exactly the same. Since these are the same, we should expect no difference for any metric when comparing one variation versus the other. However, **this will never be true** because we have some uncontrolled effects: maybe a guest selects a Waiver, and another one a Deposit. What we are really expecting is that there’s **no statistically significant difference** between these 2 variations.
+
+That’s why in the Data team we started analysing and implementing a minimal tracking that takes into account statistical analysis. For instance, here’s the results extracted on Thursday morning:
+
+
+
+All metrics have the label ‘(not significant)’, meaning that we cannot conclude that one variation is better or worse than the other. For reference, we use a confidence level of 95% - the usual business standard.
+
+We can see that PAYMENT RATE is +6% greater in terms of relative increment for variation B with respect to variation A. However, since the statistical test concludes that this difference is not significant, **we cannot conclude that variation B is better than A in achieving a better payment rate**. In essence, the statistical analysis will ensure that we draw proper data-driven conclusions when running the real A/B test.
+
+In general, the more data we have, the more diluted the random/uncontrolled effects will be and thus the more certain we will be in the conclusions we extract from a given A/B test. This effectively means that the A/B test needs to run for a certain amount of time before taking any decision. If we’re not certain on what business decision to take, we can always keep it running longer to get even more samples. In the screenshot above, we just had a couple of days of data with around ~1.5k Guest Journeys - and this is clearly not enough!
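+
+For the statistically curious, the comparison boils down to a significance test on the two variations at a 95% confidence level. Here is a minimal sketch of that kind of check for the payment rate, using a two-proportion z-test with the normal approximation (the exact test and tooling we use may differ):
+
+```python
+from math import sqrt
+from statistics import NormalDist
+
+def payment_rate_difference_is_significant(paid_a: int, total_a: int,
+                                           paid_b: int, total_b: int,
+                                           confidence: float = 0.95) -> bool:
+    """Two-sided two-proportion z-test: is variation B's payment rate
+    significantly different from variation A's at the given confidence level?"""
+    p_a, p_b = paid_a / total_a, paid_b / total_b
+    p_pooled = (paid_a + paid_b) / (total_a + total_b)
+    std_error = sqrt(p_pooled * (1 - p_pooled) * (1 / total_a + 1 / total_b))
+    z = (p_b - p_a) / std_error
+    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
+    return p_value < (1 - confidence)
+
+# With only ~1.5k Guest Journeys split across the two variations, a +6% relative
+# difference in payment rate will typically not reach significance.
+```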
+
+On Monday morning we will extract the latest results and if we observe that metrics for this A/A test are still not showing any significant differences, we will conclude that the setup is correct - and then we’ll be ready to go for the real A/B test! Exciting!
+
+# 2024-11-29
+
+## New Dash reporting improvements: Chargeable Services
+
+This week we have implemented a new tab in New Dash reporting to account for when New Dash services are expected to be charged. This part of the report reads from the new Billing tables of the backend, and we expect these to be updated accordingly so they become reliable and the single source of truth. At the moment, the data displayed is not of sufficient quality and should not be trusted until the source tables are correctly and fully updated.
+
+There will be some additional work needed from Data side to ensure that we are able to track Guest Payments within this report. However, at the moment we do not have any New Dash user with Guest Payments related services, and thus it will be done in a later step.
+
+## Guest KPIs report improvements: Billing country and robust testing
+
+The Guest KPIs report has received two significant updates aimed at improving usability and data reliability.
+
+First, we’ve introduced a **billing country dimension**, allowing users to analyse metrics with a country-specific focus. This addition empowers teams to gain deeper insights into region-based trends, helping drive more informed decisions.
+
+Secondly, we've implemented advanced testing mechanisms within our data warehouse. These tests automatically flag extreme data deviations, ensuring potential anomalies are quickly identified and investigated. By proactively monitoring the data, we aim to maintain the highest quality and reliability in our reporting.
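+
+For a flavour of what such a test looks like, here is a minimal sketch of an outlier check over a metric’s daily values. The actual tests run inside the DWH, and the threshold used here is just an assumption for illustration:
+
+```python
+from statistics import mean, stdev
+
+def flag_extreme_deviations(daily_values: list[float], z_threshold: float = 3.0) -> list[int]:
+    """Return the indices of days whose value deviates more than z_threshold
+    standard deviations from the mean - candidates for investigation."""
+    if len(daily_values) < 2:
+        return []
+    mu, sigma = mean(daily_values), stdev(daily_values)
+    if sigma == 0:
+        return []
+    return [i for i, value in enumerate(daily_values) if abs(value - mu) / sigma > z_threshold]
+```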
+
+# 2024-11-22
+
+## New Dash reporting improvements: Created Services Evolution
+
+This week we have been working on providing some visibility on the kind of services (protection, screening and deposit management) that New Dash users apply to their Bookings.
+
+In order to do so, we have created a new tab in the New Dashboard Overview Power BI reporting to track a new metric called Created Services. Created Services stands for the moment in time a given service was created within a Booking that comes from a user in New Dash. Even though in many cases the moment a service is created may be close to the moment the Booking is created, this is not necessarily always true. Thus, for now, all services are attributed to the moment they are applied to the Booking.
+
+
+
+Detail by Service Created. Basic Screening is still dominating, even though Basic Protection follows closely and we have some Waiver Pro.
+
+We have different time granularities (Daily, Weekly, Monthly) and different Dimensions: By Service, By Deal, By Has Upgraded Service, etc. When playing with the different selectors, the table and the graphs will update accordingly. You can also click on the different lines of the table to track just the values that you’re interested in.
+
+
+
+
+
+In the detail By Deal, we can see that the majority of Created Services correspond to a single account, which is Home to Host.
+
+Lastly, we conducted an important clean-up: any migrated New Dash user that does NOT have a Deal id filled in will be excluded from the reporting. This is mostly to avoid tracking test/fake accounts and to focus on the real accounts. Therefore, the adoption funnel and the global indicators have been adjusted accordingly.
+
+There are still many more improvements that we want to apply to this reporting, such as Revenue reporting or a more detailed view per user/service.
+
+## New Dash/New Pricing modelling within DWH continues
+
+This week we’ve continued with the internal DWH modelling of the New Dash and New Pricing scopes. Part of it is already being used in the latest reporting improvements on New Dash, [as mentioned before](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+At this stage, we started integrating the data contained in the new Billing tables to be able to compute expected Revenue at Service and Booking level. Even though the skeleton for these models exists, we still need to backfill the data for consistency before we enhance the reporting.
+
+## New Guest KPIs Report updates on the way
+
+Exciting improvements are already in development for the Guest KPIs Report. A new dimension incorporating billing countries will soon provide enhanced segmentation capabilities, offering deeper insights into guest behaviours across different regions. Additionally, robust data quality measures are being implemented, including an outlier detection test. This feature aims to flag potential anomalies in data, ensuring reliability and enabling proactive issue resolution.
+
+Outlier detection has proven highly beneficial in other models, enhancing data trustworthiness and operational efficiency. By bringing this functionality to the Guest KPIs report, the team continues to set a high standard for data excellence.
+
+## Data Team Empowers Colleagues
+
+The data team is taking strides to enhance company-wide data accessibility and expertise by expanding its own knowledge and toolset while enabling others to make the most of the data warehouse. This initiative includes training sessions tailored for members of other teams, equipping them with the skills and permissions needed to access and utilize the data warehouse efficiently.
+
+By providing easier access to critical data, team members across various departments can independently find the insights they need to make informed decisions, and can help others with their data requests when someone with more expertise on the subject is needed. This initiative reflects the data team’s commitment to fostering a culture of data-driven decision-making throughout the organization.
+
+## CIH Reporting incident
+
+Last week we experienced a new incident around CheckIn Hero reporting. The incident caused our reporting figures for CheckIn Hero sales and revenue to be inflated for some time during last Tuesday, but thankfully we managed to mitigate it on the same day.
+
+You can read the details of the incident here: [20241119-01 - CheckIn Cover multi-price problem (again)](https://www.notion.so/20241119-01-CheckIn-Cover-multi-price-problem-again-1430446ff9c98088b547dfb0baff6024?pvs=21). This incident was somewhat of a repeat of this one from months ago: [20240619-01 - CheckIn Cover multi-price problem](https://www.notion.so/20240619-01-CheckIn-Cover-multi-price-problem-fabd174c34324292963ea52bb921203f?pvs=21) . They both stem from the rather awkward backend design of CIH prices.
+
+The tech team is already working on the issues that triggered this incident (you can check for progress here: [https://guardhog.visualstudio.com/Superhog/_workitems/edit/24505](https://guardhog.visualstudio.com/Superhog/_workitems/edit/24505))
+
+## dbt docs available for all analysts
+
+Our Datawarehouse is where all the good magic of the Data team takes place. But it is not the most welcoming place to onboard into: with hundreds of different tables and hundreds of millions of records, navigating its contents can feel daunting. Our Domain Analysts are experiencing this pain for the first time right now!
+
+To alleviate this, we’ve started to host our DWH table documentation in a web version. All analysts have access to it now.
+
+
+
+Our hard-earned documentation, ready to help colleagues in despair.
+
+With these docs, Jamie and Alex will be able to better find their way around all the stuff in the DWH. And it’s also a convenient tool for the data team members!
+
+# 2024-11-15
+
+## Fixes for invoicing have been agreed
+
+Following up on [the incident we experienced last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we sat down with stakeholders to agree on a final fix for the root cause of the incident. This had to include many areas (Product, Finance, Tech) since the required changes had both business and technical implications.
+
+We managed to agree on the necessary changes and now have a clear way forward (you can find the details here: [Fixing the invoicing incident](https://www.notion.so/Fixing-the-invoicing-incident-13d0446ff9c98056a65bc3676a34873c?pvs=21)).
+
+The next steps are to apply the agreed changes to our invoicing exports tool, `sh-invoicing`, which the Data Team will take care of.
+
+## We have consensus on currency rates architecture
+
+This week we had the chance to sit down with the Lead devs to discuss the [architecture proposal](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) around opening up the currency rates in the DWH that we arranged a couple of weeks ago. In our meeting, we discussed the different options for how we can ensure that all the applications in Superhog can get the currency rates they need, weighing the pros and cons of different approaches.
+
+We came to a decision and now have an agreed design in mind, which we have documented here: [ADR: SQL Server rates mirroring](https://www.notion.so/ADR-SQL-Server-rates-mirroring-13f0446ff9c980d4a559fcfdaf251499?pvs=21)
+
+We will soon start to sort out the low level details of the implementation with Ben and after that, it will be building time.
+
+## Guest KPIs Report
+
+We are excited to announce the launch of a brand-new Power BI report designed exclusively for the Guest Squad. This report provides an in-depth view of the squad's KPIs, offering actionable insights into the metrics that matter most to their success.
+
+The report includes dynamic visualizations, trend analyses, and comparison tools to help the team monitor their performance effectively. It has been thoughtfully crafted to address the unique needs of the Guest Squad, ensuring they have the information they need at their fingertips to make data-driven decisions.
+
+This marks the first step in a larger initiative to create tailored reports for all teams across the organization. Our goal is to empower each squad with the tools they need to track their KPIs, uncover opportunities, and drive success.
+
+
+
+## Data requests
+
+As the year draws to a close, the Data Team has been busier than ever, with a significant increase in data requests pouring in. From custom analyses to ad-hoc reporting, the demand for insights has kept our team, particularly the Data Captain, working at full capacity.
+
+We understand the importance of delivering timely and accurate information to support your decisions, and we are doing our best to keep up with the workload. The end of the year is always a critical time, and with so many projects and initiatives in progress, our hands are full, but our commitment to excellence remains unwavering.
+
+## Adapting DWH to the latest Backend changes
+
+This week we did some adaptations to the DWH and respective Power BI reports to remove functionalities and data that will be dropped soon.
+
+To adapt to Guest Squad initiatives, we removed the modelling of Address Validation that was mostly being used for Check-in Hero reporting. This implied the removal of some functionalities that were being used in the report, most notably the Funnel and the Purchase Record detail. We also removed any table regarding Address Validation within the DWH.
+
+
+
+Final state of Check-in Hero reporting Funnel tab.
+
+On Dash Squad, we did a couple of small changes:
+
+Firstly, we dropped a column on SuperhogUser table that was not being used and will be deleted in the Backend soon.
+
+Secondly, we adapted the code so it can retrieve the Claim property NewDashMoveDate regardless of whether the value contains a Date - which was the previous behaviour - or a Timestamp, increasing robustness. This should avoid any New Dash reporting issues once more users are moved from the old dash to the new dash in the coming weeks.
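+
+As an illustration of that robustness change, here is a minimal sketch of parsing a value that may arrive either as a plain date or as a timestamp (the exact formats in the backend are assumptions here):
+
+```python
+from datetime import date, datetime
+
+def parse_new_dash_move_date(raw: str) -> date:
+    """Accept either a plain date ('2024-11-22') or a timestamp
+    ('2024-11-22T10:15:00') and always return the date part."""
+    for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d"):
+        try:
+            return datetime.strptime(raw, fmt).date()
+        except ValueError:
+            continue
+    raise ValueError(f"Unrecognised NewDashMoveDate value: {raw!r}")
+```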
+
+## New Dash reporting initiative resumes
+
+At the beginning of the week we managed to resolve some issues around the BookingToProductBundle table thanks to the Dash Squad.
+
+After these latest fixes, we’ve resumed the modelling of New Dash tables within the DWH, with the aim of being able to report the adoption of the different Services - especially the paid ones - and the related revenue. At the moment this is being handled within the DWH, but we hope that soon enough we’ll be able to work on the Power BI itself.
+
+Lastly, we did some small improvements in the reporting: now in the User Detail tab we report for each user the e-mail and the Deal ID if available.
+
+# 2024-11-08
+
+## Invoicing Incident
+
+The highlight of this week is the incident we experienced with the Invoicing exports for the Old Dash earlier this week. A combination of shaky logic and out-of-the-ordinary operations in our backend databases left us unable to export the bookings to be charged to Old Dash users properly.
+
+We mitigated the problem temporarily to unblock invoicing for this month, but we still have not applied a definitive, proper solution to the root cause. We will be working with stakeholders next week to fix it.
+
+You can read the post-mortem report of the incident here: [20241104-01 - Booking invoicing incident due to bulk UpdatedDate change](https://www.notion.so/20241104-01-Booking-invoicing-incident-due-to-bulk-UpdatedDate-change-82f0fde01b83440e8b2d2bd6839d7c77?pvs=21)
+
+## Currency Rates Architecture Proposal
+
+You might remember that months ago, back in June, [we ran a project in the Data team to integrate an external currency exchange rates provider](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) (xe.com) into our DWH. Ever since then, we’ve been receiving currency rates on a daily basis and storing them in the DWH. This is what enables us to perform all sorts of cross-currency amount conversions, which are needed for many of the reports and exports that we produce for you.
+
+This week, colleagues from the tech team got in touch because some of our internal applications are planning features that will require performing currency conversions, so they are interested in learning about our data and how they could use it.
+
+We’ve prepared a design doc to discuss with them the best technical architecture to serve the rates to other applications within Superhog, and we will discuss it next week to hopefully come to a decision.
+
+You can read the design doc here: [Design Doc: Opening Currency Rates across our systems](https://www.notion.so/Design-Doc-Opening-Currency-Rates-across-our-systems-1380446ff9c9808e82ddf229e2976d2a?pvs=21)
+
+And you can learn more about our integration with [xe.com](http://xe.com) here: [[XE.com](http://XE.com) integration](https://www.notion.so/XE-com-integration-f9b1836b67f0474389e9a7284b683343?pvs=21)
+
+## Domain Analysts in the (Dataware)house
+
+[After starting out a couple of weeks ago](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), our Domain Analysts have made great progress. Alex and Jamie have completed the initial SQL training we agreed with them, and this week have finally received access to the DWH. It’s a great feat! They are the first Superhog employees *outside* of the Data Team to have straight access to the DWH.
+
+But that doesn’t mean their training is finished… They still have a lot to learn, both about SQL Databases and about Superhog’s DWH. To continue their journey, we have prepared [a set of challenges](https://www.notion.so/Domain-Analyst-Exercises-1370446ff9c980bea918ed122d8ddbc5?pvs=21) that will force them to sharpen their tools.
+
+We will spend the next couple of weeks helping them complete the challenges and learn more about the DWH.
+
+## Guest tax cross-checks finally finished
+
+After many weeks and a lot of back and forth (it all started [here](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md))… we did it! Guest taxes are now computed in the DWH with great quality, and the Finance team is happy with the cross-checks that we’ve run.
+
+This means that the DWH doesn’t just contain how much guests are paying: we can also break down payments into taxes and revenue for Superhog. This will help us provide more accurate figures in different reports depending on what you need to know *exactly*.
+
+Be aware that our figures do not match to the cent with the accounting books: you shouldn’t consider them tax-declaration grade. But the deviations with our accounting figures are minimal, and thus we can confidently use them for business decision making.
+
+## Guest KPIs Report
+
+After successfully launching our Main KPIs report, we are now expanding our efforts to create specialized KPI reports for each squad, focusing on the metrics that matter most to their unique objectives. In collaboration with each team, we’ve defined key metrics and optimal data presentation formats that will support informed decision-making and real-time tracking.
+
+The Guest squad’s report is currently in development, with data models being built to capture all relevant information. A draft of the Guest squad’s Power BI report will soon be ready, providing an initial glimpse of the report’s structure and functionality. This project is expected to enhance each squad's ability to monitor progress and respond quickly to emerging trends, ensuring every team has the insights they need to thrive.
+
+## KPIs refactor: now live
+
+After some weeks working on improving the KPIs computation flow, we’re happy to announce that all metrics and categories displayed in Main KPIs have been successfully migrated to the new flow. At the moment, there’s minimal to no change on how the KPIs are displayed to business teams.
+
+Here’s the list of improvements that we bring with this new flow:
+
+- Entities are more granularly split depending on the purpose they identify. For instance, we used to have the computation of Xero metrics within a single flow; while now we effectively split Host Resolutions from Invoiced Revenue into two separated entities.
+- KPIs are computed at daily level with the deepest granularity needed, and afterwards aggregated into the desired time aggregation level. For instance, we used to compute metrics such as Created Bookings directly as a MTD+Monthly per category computation, and a Monthly by Deal computation for Main KPIs. Now, we have a common source of daily created bookings per any desired category that allows us more flexibility on the dimensions and aggregations that will apply to Created Bookings.
+- Centralised KPIs computation within DWH. Before, we used to have the different KPI source models divided within different DWH folders, while now these are centralised within a single KPI folder with their dedicated nomenclature.
+
+There’s still some additional work that we aim to complete in the following days, such as finalising the migration of the skeleton of dates needed for Main KPIs, improving the performance of daily segmentation models to speed it up, as well as updating the technical documentation. But the major part of work has been done successfully!
+
+# 2024-11-01
+
+## Data shows [Booking.com](http://Booking.com) is overtaking Airbnb in Europe
+
+Earlier this week, Leo got in touch with us sharing this article: https://www.holidaycottagehandbook.com/post/rates-up-in-europe-and-more-growth-for-booking-com?utm_source=newsletter&utm_medium=email&utm_term=2024-10-30&utm_campaign=Booking+com+growth+and+how+to+thrive+as+Airbnb+co-host. The main point from the article is how [booking.com](http://booking.com) is overtaking Airbnb as a booking channel in Europe, with some specific figures:
+
+> According to [Key Data](https://www.keydatadashboard.com/products/prodata?utm_source=referral&utm_medium=partner&utm_campaign=holiday_cottage_handbook), 47% of reservations in Europe come from Booking.com, while 40% come from Airbnb. Eleven percent of bookings are direct, while a small percentage come from Vrbo.
+>
+>
+> Since 2021, in Europe, Booking.com’s market share has grown from 32% to 47%, while Airbnb’s has dropped from 43% to 40%. Direct bookings have fallen from 23% to 11% – but this is likely linked to the COVID-19 pandemic. Vrbo’s market share, meanwhile, has remained flat at 2%.
+>
+
+Leo was sceptical about this: did our data match Key Data’s statement?
+
+Uri came to the rescue and shed some light on the topic. It seems we are in line with Key Data, and our data also shows [booking.com](http://booking.com) having more weight than Airbnb in Europe.
+
+
+
+Uri to the rescue.
+
+
+
+## Dagster experimentation
+
+This past couple of weeks, Pablo started a research spike to evaluate orchestration engines for our Data Platform. Orchestration engines are an important component we are missing in our infrastructure: their role is to organize the execution of jobs, mainly for moving data around, in an orderly way. Pull data from there, transform that table here, update that report there… We are currently doing this in a rather crude way, and we need to use an orchestration engine if we have any hope in scaling things so you can have the data and insights you need, when you need them.
+
+
+
+We couldn’t find an online image with only Dagster and Prefect. But this doesn’t look that bad, does it?
+
+The two strong contenders are [Dagster](https://dagster.io/) and [Prefect](https://www.prefect.io/). Pablo is currently busy giving Dagster a test ride to understand how well it would cover our needs. Once we are done with it, we’ll move to Prefect and compare the two. We hope to pick and deploy one of them in production before Christmas hits.
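+
+To give a flavour of what asset-based orchestration looks like, here is a toy Dagster sketch with two dependent assets - purely illustrative, not our actual pipeline:
+
+```python
+from dagster import Definitions, asset
+
+@asset
+def raw_bookings():
+    # placeholder: pull booking records from a source system
+    return [{"id": 1}, {"id": 2}]
+
+@asset
+def bookings_report(raw_bookings):
+    # placeholder: turn the raw records into a reporting figure
+    return len(raw_bookings)
+
+# the orchestrator infers that bookings_report depends on raw_bookings
+defs = Definitions(assets=[raw_bookings, bookings_report])
+```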
+
+## Guest taxes cross-check advances
+
+This week we finally got to implement [the recently speced out ruling about taxes for host-takes-risk waivers](https://www.notion.so/Guest-Services-Taxes-How-to-calculate-a5ab4c049d61427fafab669dbbffb3a2?pvs=21) in the DWH, which means we are now in sync with the finance teams in terms of how to compute taxes for the sales of services to guests (Waivers, CIH, etc).
+
+[We’ve repeated our cross-check procedure](https://www.notion.so/20240917-Guest-payment-taxes-cross-check-1bbda7c2145c4d60979d049565c1443b?pvs=21) with the new ruling and our numbers are almost matching, so we are currently waiting for Suzannah to give us a thumbs up so we can finally say that our revenue KPIs are reliable and take taxes into account the way it’s meant to be.
+
+## A/B Testing warming up
+
+One of the most exciting novelties we bring this quarter is Superhog’s first A/B test.
+
+If you are not familiar with A/B testing, you can read a bit on the practice here: . It’s a very simple concept which yields a lot of value, even though it’s *not* so simple to implement.
+
+This quarter, the Guests Squad and the Data team will team up to run a first A/B test on some of the Guest Journey UX details. We will play with what guests see when going through the journey, hoping to improve conversion rates and revenue metrics.
+
+We’ve already agreed with Joan on the details of what changes will be tested and what metrics we will be on the lookout for. We also discussed this week with the whole squad the technical implementation details: how to show different guests different versions of our journey, and how to ensure we track all the data properly so we can compare and study it afterwards.
+
+
+
+## Refactoring KPIs: first daily models computed
+
+This week we’ve continued advancing on modifying the logic of the data flows used for KPIs. As [explained last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), the goal is to be able to accommodate more needs - such as having daily metrics, product specific KPIs and more - in a centralised, scalable and likely more flexible way.
+
+Be warned! This post is a bit more technical. The current architecture is the following (a toy code sketch of this flow appears a bit further below):
+
+- **Daily stage**: We compute a set of metrics at daily level, at the deepest granularity that we need. We do the huge part of business logic at this stage. Data is materialised into a table to speed-up performance.
+ - For instance, we can have created bookings at daily level, and split among the Deal that these can be attributed to, Billing Country, etc. At the same time, we can know if these bookings have been created in the New Dash or in the Old Dash, the Listing Segmentation for that client and date, etc.
+- **Time aggregates**: We can easily aggregate each metric into any desired timeframe. Currently, our Main KPIs use only Month-To-Date (MTD) and Monthly, so we have 2 aggregates per model - but we can basically do whatever we want: Yearly, YTD, MoM, etc.
+ - Continuing with our daily created bookings, we can have these aggregated at monthly or MTD level, while still keeping the granularity on the dimensions. For instance, monthly bookings that come from a certain Billing Country, Deal and that are from Old Dash.
+- **Dimension aggregates**: These mainly provide the capacity to make models agnostic of the categories used to split the original metrics. Basically these follow the strategy of: given a time range, a dimension and a dimension value, provide a metric value - which is the current setup for the MTD/Monthly data display on the Main KPIs report.
+ - For instance, monthly created bookings by Deal. Or MTD created bookings by Billing Country
+
+
+
+Current production-ready KPIs following the new strategy. We have 9 entities related to Bookings, Guest Journeys and Guest Payments. The red box encapsulates all Daily models. In the green box, we have the Time Aggregates. In the brown box, all Dimension Aggregates. Lastly, in yellow, we have some temporary tests to validate that the final output is the same as we have currently deployed for reporting purposes.
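+
+As a toy illustration of the same flow in plain Python (the real models are SQL tables in the DWH; the column names below are made up):
+
+```python
+from collections import defaultdict
+
+# Daily stage: one row per day and dimension combination
+daily_created_bookings = [
+    {"date": "2024-10-01", "deal": "A", "billing_country": "UK", "bookings": 12},
+    {"date": "2024-10-02", "deal": "A", "billing_country": "UK", "bookings": 9},
+    {"date": "2024-10-02", "deal": "B", "billing_country": "ES", "bookings": 4},
+]
+
+# Time aggregate: roll the daily rows up to a month, keeping the dimensions
+monthly = defaultdict(int)
+for row in daily_created_bookings:
+    month = row["date"][:7]
+    monthly[(month, row["deal"], row["billing_country"])] += row["bookings"]
+
+# Dimension aggregate: (time range, dimension, dimension value) -> metric value
+monthly_by_deal = defaultdict(int)
+for (month, deal, _country), bookings in monthly.items():
+    monthly_by_deal[(month, "deal", deal)] += bookings
+
+print(monthly_by_deal[("2024-10", "deal", "A")])  # -> 21
+```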
+
+Since this is a business critical refactor, we’ve added some additional tests to ensure the quality of the data. The reality is that for all metrics that are currently in Main KPIs, we just need to ensure that the new ones computed with this strategy have **exactly the same value** for a given date and category.
+
+At this stage everything is advancing without blockers, under the hood of the DWH. We already have 9 entities, mostly comprising Bookings, Guest Journey and Guest Payments metrics, that are ready to be deployed. Starting Monday, we will deploy these new metrics and deprecate the old models to minimise the transitional parallel flow that we currently run.
+
+In fewer words, for all our KPI lovers: you should not observe any difference, and all this magic will happen without you even noticing 🧛🏽♂️
+
+## Updating how we retrieve New Dash users
+
+This week we’ve also done a quick implementation to improve the way we track whether a user is in New Dash.
+
+With the help of the Dash Squad, who have put in place more precise logic, we’re now able to easily determine:
+
+- Whether a User is in New Dash or not
+- In which New Dash version a User first appeared
+- Whether a User was originally moved from Old Dash to New Dash, and when it happened
+- Whether a User was created directly in New Dash
+
+This will help us report on New Dash performance, as well as track V2 users as they get created or moved in the future.
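+
+For illustration, these flags could surface in the DWH as a small set of columns on a user dimension, along these lines (the table and column names are hypothetical, not the actual schema):
+
+```sql
+-- Hypothetical user dimension exposing the New Dash tracking flags described above.
+select
+    user_id,
+    is_new_dash_user,               -- true if the user is in New Dash
+    first_new_dash_version,         -- e.g. 'MVP', 'V2'
+    migrated_from_old_dash,         -- true if the user was moved over from Old Dash
+    migrated_from_old_dash_at,      -- when the migration happened (null otherwise)
+    created_directly_in_new_dash    -- true if the user never existed in Old Dash
+from dim_users
+where is_new_dash_user;
+```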
+
+Special thanks to Luke for the support on this subject!
+
+# 2024-10-25
+
+## New Churn metrics are available in Main KPIs
+
+This week we’ve finalised the main line of work on computing and reporting Churn figures. As we explained last week, the volume of churning accounts is not a good representation of the impact churn has on our business - mainly because a small host churning will have a much more limited impact in terms of Revenue, Listings and Bookings than a big host churning.
+
+We’ve come up with 3 different Churn Rate metrics, which measure the relative impact that the accounts churning in a month have with respect to the overall business. Specifically, these metrics are:
+
+- Bookings Churn Rate
+- Listings Churn Rate
+- Revenue Churn Rate
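+
+The exact definitions live in the Data Glossary mentioned below; as a rough sketch of the idea, each rate divides the activity attributed to the accounts churning in a month by the total activity of that month. Table and column names here are illustrative:
+
+```sql
+-- Illustrative monthly Revenue Churn Rate: revenue of accounts churning in the
+-- month, divided by the total revenue of that month.
+select
+    m.month,
+    coalesce(sum(m.revenue) filter (where c.deal_id is not null), 0)::numeric
+        / nullif(sum(m.revenue), 0) as revenue_churn_rate
+from monthly_revenue_by_deal m
+left join churning_deals c
+    on c.deal_id = m.deal_id
+   and c.churn_month = m.month
+group by m.month;
+```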
+
+The detailed definition of these rates is accessible in the Data Glossary of Main KPIs, as described in the screenshot below (double click to make it larger!)
+
+
+
+These metrics have been validated with Matt and Suzannah and have been made accessible in Main KPIs, specifically in the tabs:
+
+- MTD
+- Monthly Overview
+- Global Evolution over Time
+- Detail by Category
+
+This also means that these metrics can be split by the 2 existing categories: Billing Country and Number of Listings segment.
+
+Overall these metrics show quite a bit of volatility on a monthly basis, because the share of Revenue, Bookings and Listings that each churning account represents can vary a lot over time. We do not observe any clear seasonality pattern at this stage.
+
+
+
+Example of the volatility we have for these new metrics in the Global Evolution over Time, in some random selection. This is linked to the fact that the different Churning accounts have diverse characteristics. Please go to the report to see the actual figures.
+
+That’s it for the time being for the Data line of work on the Churn subject, but we’ll likely work more on this in the months to come. Very exciting project and a nice collaboration with Alex, Suzannah and Matt!
+
+## Improvements in Account Managers reporting (previous Top Losers)
+
+This week we’ve also finalised the Top Losers line of work → well, now it’s called Account Managers reporting.
+
+The main improvement - aside from the name of the report - has been integrating key Hubspot attributes of each account, such as the Account Manager each account is assigned to, the moment an account went live, when it was offboarded (if applicable), etc. This allows for more flexibility when exploring the performance of each account and the impact it can have on our business.
+
+Additionally, we’ve included Listing information for each account, as well as other minor changes, such as renaming TOP LOSERS to MAJOR DECLINE and TOP WINNERS to MAJOR GAIN. These new names should better reflect the meaning behind the scoring computation.
+
+Even though the reporting is quite simple - just one interactive tab - the possibilities are quite interesting. For instance, in the screenshot below we can see the performance of 2 accounts: one categorised as MAJOR DECLINE (purple) and another as MAJOR GAIN (green).
+
+
+
+Example with 2 accounts selected. The green line is categorised as MAJOR GAIN, while the purple line is MAJOR DECLINE.
+
+We can visually see how the purple line has been decaying over time, while the green one is increasing - especially in the Revenue chart. In this case, both accounts are currently in the Active lifecycle state, meaning they have not churned - yet we can clearly see the decline of the purple account: time to act!
+
+Being able to quantify both the growth of an account and the impact this growth can have on our business is key to prioritising retention efforts. That’s why focusing specifically on accounts tagged as MAJOR DECLINE that have not churned yet can be very meaningful for securing the long-term objectives of Truvi.
+
+Lastly, we’ve also conducted a training session with the account managers to explain how to use the report and what kind of information can be found in it. Hopefully it will become a key report for better data-driven actions on the AM side!
+
+## How are we going to handle more demand for KPIs?
+
+At this point you’ve seen we’ve closed two lines of work this week: the Churn definition and the Account Managers reporting, to - hopefully - help understand and prevent Churn. Cool, what’s next?
+
+Well, the reality is that these two lines of work have clear dependencies within the DWH on our main KPI data modelisation flow, which has shown some limitations. This by itself is not a big problem, but we need to take into account how we are going to accommodate future needs for KPIs. And these needs are already here!
+
+In essence, we will shortly start with the dedicated Product KPIs, mostly for Dash and Guests at first. However, we had a decision to make: do we go for a centralised approach, meaning computing all KPIs within the same flow, or do we build dedicated flows for each reporting line?
+
+In short, centralising seems the better option, especially because some needs will clearly be shared between multiple reports: for instance, if there’s a new product launch, we’d likely want a more detailed view of the product’s performance while still reporting product revenue in Main KPIs. And handling a similar modelisation within 2 flows would become messy over time. Especially if it’s 3 flows instead of 2. Or 4. You get the point.
+
+This has some drawbacks though, mostly related to the fact that we do not (yet) store daily figures to keep the process speedy - which we should do, since it opens up many more reporting possibilities. Additionally, we’d need to be able to accommodate many more metrics, categories, etc. in order to fit all needs.
+
+
+
+Even Boromir agrees…
+
+So how can we be able to scale up the KPIs flow to accommodate these and future needs?
+
+Well, the answer is… we’re working on it. Some areas are quite straightforward to tackle, others a bit more complex. But more complex means more fun - *such as the 80 million records of a potential daily listing segmentation*. We’re currently running a proof of concept and so far it’s going great. With a bit of luck we’ll have more interesting news to share soon on this subject.
+
+# 2024-10-18
+
+## Athena and e-deposit databases have been successfully split
+
+[After some weeks of preparation](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we finally pulled the trigger together with the API Squad and performed the migration of the Athena/e-deposit database.
+
+The records for both services lived together in a single database up until last Wednesday, but that won’t be the case anymore. After the changes made by the API Squad, Athena (aka Guesty) records will keep appearing in the same database, but e-deposit records will land in a new one.
+
+On the DWH side, we replicated the split by going from one to two ingestion pipelines into the DWH, one for each database. This split also got propagated throughout the DWH, with independent tables and pipelines handling the two services. Despite this, related reports will keep showing the same data thanks to the prep work done by the Data team.
+
+In the end, this is a technical change with little noticeable impact as of today, but it will help us work better long-term. Kudos to the API Squad for the good work.
+
+## Domain Analysts Programme begins
+
+This week we also began work on one of the most exciting goals we have for this quarter: onboarding our first Domain Analysts!
+
+Let’s cover definitions first: our vision for the Domain Analyst profile is a hybrid between a Data Analyst and a functional expert in some area (Marketing, Finance, Product, etc). That is, someone who is extremely knowledgeable in some Domain, and also has Analyst skills and access to [Superhog’s Data Platform](https://www.notion.so/Data-Platform-908fafb0c4b345139a89d8684c281d24?pvs=21). With this combination, the Domain Analyst can be a reference expert who helps leverage Data in their specialised area. To clarify, we expect these analysts to sit within the teams of their domains, not within the Data team itself. With this, we achieve a bit of a hub-and-spoke setup that balances centralisation and decentralisation in Superhog’s data expertise.
+
+And no, we haven’t hired any new faces. Instead, we’ve decided to grow the talent internally! Alex A. and Jamie D. have been selected to become our first Domain Analysts. They have been doing a great job with us for a long time, and it shows since they are already acting as go-to people within their areas. With their onboarding, we expect them to increase their skills and, simply, do what they are already doing, but even better.
+
+During the next quarter, the Data team will work together with them to improve their skills in databases, SQL, data modelling and other tools that any analyst should have in their belt. By the end of the quarter, we expect to all be wondering how we managed to survive so long without these powers in their hands.
+
+Good luck Alex and Jamie!
+
+## Athena claims negotiation support
+
+Following up on [last week’s update](https://www.notion.so/Data-Platform-908fafb0c4b345139a89d8684c281d24?pvs=21), this week we continued analysing the patterns in our Athena (Guesty’s e-deposit) API to better understand them and provide intelligence around an ongoing terms negotiation.
+
+We discussed the data together with the involved team and were able to draw interesting insights that will drive some of the points in our proposals to Guesty.
+
+Now that the Data is served, it’s up to our expert negotiators (Leo and Humphrey) to seal the deal.
+
+## Top losers report is now live
+
+In the previous entry of the Data News we presented a [work-in-progress report](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) to be able to track the performance of the different accounts and categorise them based on growth and impact. During this past week we have finalised the Power BI reporting and shared it with users to gather impressions and improvements.
+
+We have already implemented some small changes, such as quantifying the number of Bookings each account has had in the past and providing a summary table.
+
+With the integration of Hubspot data, we will be able to enrich the reporting by identifying key aspects of each account, such as when these accounts went live, when they were offboarded (if applicable) and even the Account Manager assigned to each account.
+
+## Churn definition update
+
+This week we made great advances towards the definition of Churn rates. As a reminder [from last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), the goal is to measure the impact our churn is having in different indicators, mainly Revenue, Bookings and Listings. Why?
+
+Well, imagine in month A we had 50 accounts churning and in month B we had only 10. We could be tempted to say that month A was worse in terms of churn than month B, but that might be inaccurate. Maybe those 10 accounts that churned in month B actually provided much more revenue and bookings than the whole 50 accounts of month A!
+
+The problem is how to quantify, on a monthly basis, this Revenue, Booking and Listing Churn in terms of rates - meaning attributing a given % of Revenue, Bookings and Listings to the Churning accounts with respect to the global figures. After some refinement sessions with Matt and Suzannah, we prioritised Revenue and Booking Churn and ended up with two measuring possibilities, each with its pros and cons - it’s a bit technical so we won’t go into the details. At that stage we decided the best approach was to compute both possibilities and take a final decision based on real numbers rather than assumptions.
+
+First things first though, we needed to update our consideration for Churning accounts in a given month. With the recent integration of Hubspot Deals data into the DWH and the outstanding support from Alex A., we managed to implement a more accurate definition:
+
+> A Deal is considered as churning in the month that:
+>
+> - The contract is marked as offboarded in Hubspot, **OR**
+> - Its last booking was created 13 months ago
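+
+In SQL terms, the rule above translates roughly into the following sketch (hypothetical table and column names, not the actual DWH implementation):
+
+```sql
+-- A Deal's churn month is the earlier of its Hubspot offboarding month and the
+-- month falling 13 months after its last created booking.
+select
+    deal_id,
+    least(
+        date_trunc('month', hubspot_offboarded_at),
+        date_trunc('month', last_booking_created_at) + interval '13 months'
+    ) as churn_month
+from deals
+where hubspot_offboarded_at is not null
+   or last_booking_created_at < current_date - interval '13 months';
+```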
+
+This definition has already been implemented for the metric **Churning Deals** and is **fully available in Main KPIs reporting**. This has also improved the quality of the Deal Lifecycle, since we’re able to capture Deal offboarding, which is much more precise than our previous definition. No change has been made in the Listing lifecycle or Churning Listings at this stage.
+
+Lastly, the new metrics for Revenue and Booking Churn rates are at this moment under validation. Once validated, we will provide the final definition and make these new KPIs available in the Main KPIs reporting. Special thanks to Alex Anderson for his massive support on this subject!
+
+# 2024-10-11
+
+## Guesty claims analysis
+
+This week we got pulled in to support some data-driven decision making around our Athena service (e-deposit for Guesty). Leo and co. were interested in understanding the patterns of claim posting by the different partners that come through this channel. This will help us propose better terms in an upcoming contract negotiation.
+
+Our work is still WIP, but we’ve already been able to identify very distinct claiming patterns across the partner base, which is exactly the kind of fact we needed to learn about to support the business decision. We will keep working on this together with the team.
+
+This is also a great example of how you can rely on the Data Team for support in your day to day!
+
+## Churn metrics and Top losers report
+
+This week we started investigating the different forms of Churn we have in Truvi. In essence, Churn is a measure of the clients that at some point in the past had activity within Truvi and no longer do. Having a proper **Churn definition** can help us understand the general trend of the business, as well as identify areas of improvement by providing further knowledge on seasonality and reasons for offboarding.
+
+At this stage, we have two separate Churn definitions at deal level: the RevOps one - mainly, when a client was offboarded - and the Data one, which is based on the [Deal Lifecycle](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) and tags clients as Churning if their last booking was created 13 months ago. First things first, we need to align these definitions to have a coherent understanding of churn across the different areas of the business - and likely, the final definition will be a combination of both.
+
+Once we have this definition set up, we also need to attribute activity metrics to these Churning customers. This will include being able to measure **Revenue Churn**, **Booking Churn** and **Listing Churn**. While the churn definition discussion is still ongoing, you have probably noticed that the previous definitions are quite strict, in the sense that once a customer is tagged as churned there is very little room left to act and retain them. This is where **Churn Prevention** comes in, and it’s a different line of work.
+
+In order to anticipate which accounts are “at risk of Churning soon” or even “have had some recent decay in performance”, we are currently building a **Top Losers** report to help Account Managers identify and prioritise retention efforts. At the moment the idea is to provide a very simple report with two scores:
+
+- **Growth**: as a data-driven measure to identify if the account is growing or decaying
+- **Impact**: the impact in terms of revenue a certain growth can have
+
+Thus, if Growth is negative, meaning the account is decaying, we can prioritise efforts by the revenue Impact it represents for Truvi. Lastly, this work-in-progress report can be enriched with the data we are ingesting from Hubspot, so it’s a clear use case for [the work we’ve been conducting in previous weeks](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
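+
+Since the report is still WIP, the exact scoring logic may change; purely as an illustration, Growth could be the relative change between two trailing windows and Impact the account’s share of total revenue, along these lines (hypothetical tables and window sizes):
+
+```sql
+-- Illustrative scoring: growth compares the last 3 months against the 3 months
+-- before, impact is the share of recent revenue the account represents.
+with windows as (
+    select
+        deal_id,
+        sum(revenue) filter (where month >= date_trunc('month', now()) - interval '3 months') as recent,
+        sum(revenue) filter (where month <  date_trunc('month', now()) - interval '3 months'
+                               and month >= date_trunc('month', now()) - interval '6 months') as previous
+    from monthly_revenue_by_deal
+    group by deal_id
+)
+select
+    deal_id,
+    (recent - previous)::numeric / nullif(previous, 0) as growth,
+    recent::numeric / nullif(sum(recent) over (), 0)   as impact
+from windows
+order by impact desc;
+```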
+
+
+
+Our one-pager Top Losers report. This is still WIP and might change in the future.
+
+That’s it! This Churn project is still in its early stages, so stay tuned for future updates in the coming weeks.
+
+## Migration of e-deposit database
+
+This week we’ve finally aligned all the pieces with the API Squad for the upcoming Athena/e-deposit database migration. Next Wednesday, the API squad will be splitting the current database into two distinct ones: the existing one will remain the DB for Athena, while the new one will be dedicated exclusively to the other e-deposit clients.
+
+The Data Team has prepped to keep ingesting everything correctly in the DWH, and we aim to support this migration while keeping our reporting up and running without downtime.
+
+Will keep you posted and hopefully come back next Friday with the announcement that everything took place successfully.
+
+## dbt meetup
+
+Last week Data Team went out for lunch and continued the afternoon with a quick field trip to the local Barcelona dbt meetup. dbt is one of the core technologies we use in the Data Team, and without which doing the work we do would be *really* hard. There is a local community that holds technical meetups every couple of months, and these are typically attended by other data, product and software professionals.
+
+All of us attended the meetup for the first time, and we had the chance to learn how colleagues from other companies work with Data in their businesses.
+
+
+
+Superhog’s Data Team in the first row. Pablo’s pony tail finally serves a useful purpose.
+
+## A/B testing discussions
+
+This week we’ve also started the discussion to launch the first A/B test within the Guest Journey, with the goal of launching and finishing an A/B test before the end of Q4.
+
+The idea of A/B testing is to randomly split traffic into 2 groups, A and B, where group A sees a different setup of the Guest Journey than group B. Generally, one of the 2 groups keeps the current state - thus referred to as the Control group - and the other has some controlled changes with respect to the Control group (different pricing, different visuals, etc.), usually named the Study group.
+
+A/B testing provides many benefits for understanding cause and effect when there’s a new deployment, since we observe the performance of the two groups in a controlled environment, at the same time, which reduces seasonality bias. It also ensures that if there’s a buggy deployment in place, the negative impact is minimised since it does not affect all traffic.
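+
+On the implementation side, one common way to split traffic deterministically (not necessarily the approach we will end up using) is to hash a stable identifier, so the same guest always lands in the same group. A sketch with a hypothetical `guest_journeys` table:
+
+```sql
+-- Deterministic 50/50 assignment based on a hash of the guest journey id.
+select
+    guest_journey_id,
+    case when ('x' || substr(md5(guest_journey_id::text), 1, 8))::bit(32)::int % 2 = 0
+         then 'control'
+         else 'study'
+    end as ab_group
+from guest_journeys;
+```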
+
+
+
+We’re starting small at first - we want to do this right. There’s plenty that can go wrong in an A/B test implementation, and wrong configurations mean wrong results, which lead to poor decision making. But once we succeed with this first A/B test and ensure the process is right, taking decisions based on actual measurable performance will be much easier in future months.
+
+## Main KPIs Significant Renaming
+
+On Thursday 10th of October we deployed new changes in the Power BI report of Main KPIs within Business Overview. With the goal of better aligning with Finance figures, **Guest Revenue will no longer deduct the amount that is paid back to hosts**, and will instead reflect any guest payment after taxes. **This also impacts the figures in Total Revenue** and weighted revenue figures. The detail of the Waivers is still available (revenue, payments to hosts and retained amount).
+
+Additionally, there has been a **renaming of revenue KPIs** to clarify the meaning of the values observed. We encourage the users to check the updated Data Glossary for further information.
+
+Find below a summary table of the changes:
+
+| Metric name | Metric previous name | Computation changes |
+| --- | --- | --- |
+| Total Revenue | Total Revenue | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Total Revenue per Booking Created | Total Revenue per Booking Created | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Total Revenue per Guest Journey Created | Total Revenue per Guest Journey Created | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Total Revenue per Deals Booked in Month | Total Revenue per Deals Booked in Month | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Total Revenue per Listings Booked in Month | Total Revenue per Listings Booked in Month | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Guest Revenue | Guest Revenue | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Guest Revenue per Guest Journey Completed | Guest Revenue per Guest Journey Completed | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Guest Revenue per Guest Journey with Payment | Guest Revenue per Guest Journey with Payment | Waiver amount paid back to hosts is no longer deducted, thus figure is now higher than before. |
+| Waiver Revenue | Waiver Amount Paid by Guests | - |
+| Damage Host-Waiver Payments | Waiver Amount Paid back to Hosts | - |
+| Waiver Retained | Waiver Net Fees | - |
+| Check-in Hero Revenue | Check-in Hero Amount Paid by Guests | - |
+| Deposit Fees Revenue | Deposit Fees | - |
+| Invoiced Booking Fees Revenue | Invoiced Booking Fees | - |
+| Invoiced Listing Fees Revenue | Invoiced Listing Fees | - |
+| Invoiced Verification Fees Revenue | Invoiced Verification Fees | - |
+| Invoiced Athena Revenue | Invoiced Guesty Fees | - |
+| Invoiced E-Deposit Revenue | Invoiced E-Deposit Fees | - |
+
+Lastly, Guest Payments and Guest Payments weighted measures have been deleted since they represent now the same as Guest Revenue and Guest Revenue weighted measures.
+
+# 2024-10-04
+
+## Q3 Data Achievements, bye bye Q3!
+
+October already! It’s been 3 busy months, and here at the Data Team we did tons of things: set up business KPIs, supported product and finance initiatives and continued building our Data foundations.
+
+It’s time to close Q3 and start focusing on the last quarter of the year. Before that though, we wanted to take a moment to write down our collective data achievements. Find the full content below:
+
+[Q3 Data Achievements ](https://www.notion.so/Q3-Data-Achievements-1130446ff9c9800e84e4f03750b752a1?pvs=21)
+
+Also, our dear Joaquín will be off for a couple of weeks to take some very well deserved holidays.
+
+## New Dash and New Pricing data integration moving forward
+
+This week we have been quite busy advancing the reporting needed for New Dash. Our current goal is to integrate the new backend tables in the scope of New Pricing first, to be able to monitor New Dash V2 once it is deployed. Overall, we would like to set up monitoring on the Services and the revenue they are bringing.
+
+Specifically, we’ve integrated 9 new tables this week:
+
+- `ProductService`
+- `ProductServiceToPrice`
+- `BillingMethod`
+- `InvoicingMethod`
+- `PaymentType`
+- `Protection`
+- `ProtectionPlan`
+- `ProtectionPlanToPrice`
+- `ProtectionPlanToCurrency`
+
+… and we’ve started playing around with the data in order to propagate this information through the different layers of our DWH. Additionally, we’ve asked for some additional content in order to properly track which services are applied to each booking. Since this is starting to get bigger, we’ve also spent some time tracking the current status of the different new data flows so we don’t miss anything important. It’s accessible through [this Notion page](https://www.notion.so/2024-10-02-Integrating-New-Dashboard-New-Pricing-into-DWH-1130446ff9c9804a9cb2f5d49e073bab?pvs=21). Thankfully we’ve had extensive support from the Dash squad, so many thanks to Dagmara, Gus and Yaseen!
+
+Lastly, there have been a couple of fixes on our current pipelines for New Dash MVP monitoring, especially thanks to Clay’s test account!
+
+## First steps towards dedicated Product KPIs
+
+A few weeks ago we started gathering requirements on the KPIs needed for the different product managers in order to bring more data-driven decision making on the product initiatives.
+
+At the moment, we’ve completed the first round of contacts in the main areas, as well as clarified the first drafts within Guest and APIs needs so we can start planning development on our side.
+
+
+
+Cool metric but… at which date do you want to attribute it, creation, check in or check out?
+
+Overall, very nice deep-thinking sessions. More on this subject soon!
+
+## Foreign Data Wrappers are here and analysts are loving them
+
+Last week, we rolled out a new improvement to our DWH development environment to improve the experience of the Data team members: our local Postgres environments now include Foreign Data Wrappers pointing at the production DWH instance for the sync schemas.
+
+[Foreign Data Wrappers (FDWs)](https://wiki.postgresql.org/wiki/Foreign_data_wrappers) are a very convenient Postgres feature. FDWs allow setting a connection within a Postgres server to some other external data store, making the tables in that external system appear as local tables which can be queried exactly the same as regular tables and views.
+
+We have leveraged them to allow analysts to easily access real production data in their local environments. This allows for a very smooth developer experience: Data team members can build models on their laptops with exactly the same data that will be found in the production DWH, which makes spotting errors and performance issues much simpler and faster. This setup, together with our adherence to [trunk based development](https://trunkbaseddevelopment.com/), allows us to deliver changes to our production DWH fast, often several times per day.
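+
+For the curious, a minimal postgres_fdw setup looks roughly like this - the server name, host, schema and credentials below are placeholders, not our actual configuration:
+
+```sql
+-- Make the production DWH reachable from the local Postgres instance.
+create extension if not exists postgres_fdw;
+
+create server prod_dwh
+    foreign data wrapper postgres_fdw
+    options (host 'dwh.example.internal', port '5432', dbname 'dwh');
+
+create user mapping for current_user
+    server prod_dwh
+    options (user 'analyst_readonly', password 'redacted');
+
+-- Expose one of the production sync schemas as local foreign tables.
+create schema if not exists sync_bookings;
+import foreign schema sync_bookings
+    from server prod_dwh
+    into sync_bookings;
+```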
+
+Even though this is a rather technical and internal improvement, you might notice it because Uri and Joaquín yell less frequently at their laptops since they don’t have to struggle with copying data anymore.
+
+## Continuing work on Hubspot data
+
+After [integrating our Hubspot instance with our DWH](https://trunkbaseddevelopment.com/), we’ve started to work on modelling it properly so that it can be leveraged to build analyses and reports. Hubspot tables are tremendously complicated and data heavy (some of our servers choked reading from them for the first time), so cooking them up to ease their consumption will be a must.
+
+We are teaming up with Alex A. to understand the different properties and decide which bits are most critical. We have decided to focus on Deals and Engagements (ie records about calls, mails, meetings, etc) for now, so we will try to leverage the most important fields out of those tables.
+
+Stay tuned for some great reports making the best out of this data very soon.
+
+# 2024-09-27
+
+## Progress in CIH and Cancellation API billing process definition
+
+This week we’ve made progress in documenting the billing processes that will be followed for two of our new services: CIH API and Cancellation API.
+
+The Data team will be acting as a data bridge between the backend systems that support these products and the delivery of the invoicing data in Xero. This way, we will be able to generate invoices in our accounting system without any human interaction, allowing us to scale our services without major operational pains.
+
+You can read our WIP process documentation here:
+
+- [CIH API Invoicing process](https://www.notion.so/CIH-API-Invoicing-process-1060446ff9c980d5a5cdfaf253667bac?pvs=21)
+- [Cancellation API Invoicing process](https://www.notion.so/Cancellation-API-Invoicing-process-5c83b1465bb744f89d052232f39396bf?pvs=21)
+
+We will now wait for inputs from the Finance and API teams so we can settle for a final and complete definition before we move on to some tests.
+
+Hopefully, we will soon be making invoices for real sales!
+
+## HubSpot Data Now Integrated into Our Data Warehouse
+
+We’re excited to announce that we have successfully extracted and integrated data from HubSpot into our data warehouse (DWH). This is a significant milestone, giving us access to a wealth of valuable data that was previously kept in HubSpot. While we are still navigating the vast amount of information stored in HubSpot, we have started by focusing on what we believe to be the most relevant datasets for our current reporting needs.
+
+The integration of HubSpot data into our DWH provides significant benefits, including easier and faster access to key metrics by centralizing data, eliminating the need for manual exports or separate logins. This also enhances reporting capabilities, allowing for deeper insights into customer interactions, sales, and marketing performance. By combining HubSpot data with existing operational data, we can achieve more comprehensive analysis, offering a clearer understanding of performance, customer behaviour, and opportunities for improvement.
+
+## Updated Host Fees Report
+
+This week, the Host Fees report received a significant update aimed at improving user experience. The report now displays all payments in their respective currencies, accurately converted to GBP using the correct exchange rates. Additionally, new visualizations have been added, and existing data has been refined to provide clearer insights. These enhancements ensure a smoother, more efficient experience for users accessing the report.
+
+# 2024-09-20
+
+## New Dash MVP reporting fixes, preparing for V2 launch
+
+This week we’ve also fixed a downtime of the New Dash reporting, which currently contains the MVP performance. A couple of weeks ago, with the appearance of new MVP users, our data transformations were not able to process the data correctly. After some discussions with the tech team, we managed to find a way to properly track MVP performance, which we implemented and deployed soon after.
+
+In order to reduce potential future downtimes we are already preparing for the launch of V2. At this stage, we aim to detect users moved from the Old Dash to the New Dash using the newly configured claim values - and if everything goes according to plan, this should happen automatically at V2 launch without any action from the Data side.
+
+Of course though, there are going to be some other actions handled by the Data team before the launch. We also need to improve the capabilities of the New Dash Reporting, for which we have already gathered requirements and started understanding how to extract the data. So stay tuned and hopefully we’ll have more news in a couple of weeks!
+
+## Check-in Hero reporting now without taxes
+
+We’ve been working for a while on deducting the [taxes on Guest Payments](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) to have a more consistent computation of the different revenue sources. In connection with this initiative, we’ve also taken the opportunity to modify the [Check-in Hero reporting](https://app.powerbi.com/groups/me/apps/14859ed7-b135-431e-b0a6-229961c10c68/reports/8e88ea63-1874-47d9-abce-dfcfcea76bda/ReportSection?experience=power-bi) so that the monetary amounts displayed are consistent with those of other reports, such as Main KPIs, which exclude taxes.
+
+Now, the Check-in Hero amounts reported exclude taxes. In some cases, though, we’ve decided to keep both the with-tax and without-tax amounts, since both can be useful for different teams. In any case, the visuals clearly indicate whether taxes are included or excluded.
+
+Lastly, we’ve taken the opportunity to do some small visual changes around the report to improve the user experience.
+
+## Guest Payments Report & Main KPIs now without taxes
+
+Just like with Check-in Hero report, we have updated both the [guest_payments - Power BI](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/01d5648d-1c0b-4a22-988d-75e1cd64b5e5/ReportSection?experience=power-bi) and [main_kpis - Power BI](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?experience=power-bi).
+
+In the first report, you can easily see all guest payment values, both with and without taxes, by applying a simple filter based on your needs. We have also updated some visuals on the report to enhance its usability.
+
+
+
+For the main KPIs, we have updated all guest payment values to exclude taxes, ensuring better alignment with the finance team and their reporting. Additionally, we resolved issues with rates that were not being calculated correctly when selecting multiple countries or groups based on the number of available listings.
+
+A small note on this: when selecting a category with multiple values, the rate metrics do not aggregate correctly - rates are ratios, so summing or averaging them across values would not give the true combined rate. As a result, we do not display these metrics when multiple values are selected. We recommend reviewing rate metrics for each category value individually.
+
+
+
+***When selecting 2 or more countries you won’t see any values***
+
+## More Cosmos DB integrations: Screening API
+
+Following [our recent integrations to DWH for e-deposit](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we’ve also started to integrate Screening API verifications into the DWH. Using our internal tool `Anaxi`, we tested the new configuration and it seems to be working well. In the following days this will allow us to model the screening data within the DWH, where the Data team has much more versatility, and to migrate the source of the Screening API report in Power BI from Cosmos DB to the DWH.
+
+
+
+This is why you don’t let Data Analysts do Data Engineer jobs 😛. Special thanks to Pablo for the knowledge sharing on this area!
+
+# 2024-09-13
+
+## Quarterly alignment with TMT
+
+[After some recent preparations](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we finally held the meeting together with the TMT to discuss the priorities and scopes of the Data team for Q4. It was a good session in which we were able to go over our proposal contents together with the team and discuss around them.
+
+Overall, we were pretty well aligned already (which is great news, in our opinion) and we only made minor edits in the plan. You can check the scopes we are aiming for here: [Q4 Data Scopes proposal](https://www.notion.so/Q4-Data-Scopes-proposal-75bf38ab8092471d910840ab86b0ec60?pvs=21)
+
+For now, we will focus on brushing up on the Q3 outstanding items so we can happily close the ongoing quarter.
+
+## Small updates on KPIs by Deal
+
+This week we’ve added a couple of improvements on the Main KPIs reporting, specifically on the Deal tabs.
+
+First things first: we now have an account name linked to each Deal ID, to better understand which account we are referring to when selecting a deal. You’ll now be able to select the KPIs per deal via the ID or the Name. Keep in mind, though, that we currently do not have a source of truth for “this Deal ID has this Name”, so the names displayed might differ from those you are used to. If you observe any inconsistency, please let us know!
+
+
+
+Name and Billing Country are now displayed in the Deal Comparison and Detail by Deal tabs. Additionally, a new filter on Deal Name is now available.
+
+Additionally, we are now providing the Billing Country of each Deal to ease comparisons. This is especially useful when comparing KPIs across different Deals, so we can easily understand where these hosts are located.
+
+In the following days we will wrap up this exercise of Main KPIs, since most of the requirements discussed have already been implemented.
+
+# 2024-09-06
+
+## We need your feedback!
+
+This week we sent a survey to assess the current process of handling Data requests. It's been ~3 months since we have implemented the process, so we would appreciate any feedback you might have around it to make it even better!
+
+
+
+Everyone is invited to answer - those who use the Data Request workflow, those who contact Pablo, Uri or Joaquín directly, or those who have never requested any help from Data. Any input is valuable to improve!
+
+We promise it will take you 2 minutes to answer it. Here’s the link to the survey:
+
+→ [Survey link here](https://forms.office.com/Pages/ResponsePage.aspx?id=30IohpgpJki-qbcmvAHTp3Q6r_qBevlBs8LDITDWGdFUMlJWQjhHWE80SFI5QjVJQUdXU1MyN1RZNC4u)
+
+A massive thanks to all people who already submitted your feedback!
+
+## Now available: KPIs by host Billing Country
+
+A few days ago we implemented the first category - or dimension - within the [Main KPIs](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi) reporting, namely the [host segmentation based on the number of active listings](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+This week we have deployed a new category: the Country in which our clients are billed. This category reflects the location of the clients in terms of sourcing effort, but of course their listings can be placed anywhere in the world, as we saw in [this analysis](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+With this new dimension, we’re able to provide more specific insights based on the Host location. Let’s see some of them!
+
+1. Around ~80% of the Check-in Hero revenue comes from USA hosts:
+
+
+
+1. USA is steadily gaining market share over GBR in terms of number of Listings Booked in Month and Bookings Created:
+
+
+
+Listings Booked in Month evolution
+
+
+
+Created Bookings evolution
+
+Additionally, this week we’ve added a new metric, Billable Bookings. It’s been… well, very complicated to get the logic in place to compute them in the DWH, and we still observe some small discrepancies here and there. In any case, the order of magnitude should be close to reality, so we decided to move forward and expose the new metric. We just added the prefix “Est.” to make it explicit that Billable Bookings are estimated.
+
+
+
+Lastly, the Main KPIs report now appears at the top of the Business Overview Power BI app. After many weeks of work, the most critical information from the revenue reports is already included in the Main KPIs reporting, as well as many other business-wide insights. So now Main KPIs leads Business Overview!
+
+## First steps towards business-oriented Data Tests on KPIs
+
+Now that we have [implemented an automatic way to test the behaviour we expect from the data](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we decided to broaden the coverage of our alerts by including some business logic…
+
+… and what better place to do so than the main KPIs of the company!
+
+
+
+A real Superhog example of an outlier - can you guess what is represented on the Y-axis?
+
+This week we covered the KPIs with a couple of meaningful tests:
+
+1. Ensure that the sum of a given set of metrics for any category cannot be higher than the Global reported value.
+ 1. For example: the sum of bookings created in the different host billing countries of USA, GBR, CAN, etc cannot surpass the bookings created stated in the Global category.
+2. Ensure that the latest values of a given set of metrics are within an “acceptable range” based on the previous history.
+ 1. For example: if the number of bookings created is usually around 1k, we will raise an alert if one day we spot a value of 10k.
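+
+Sketched as SQL, the first of these tests boils down to comparing the category sums against the Global row. Table and column names are illustrative; the actual tests live in our test suite:
+
+```sql
+-- Test 1: per date and metric, the sum over billing countries must not exceed
+-- the value reported for the Global category. Zero rows returned = test passes.
+with by_country as (
+    select metric_date, metric_name, sum(metric_value) as summed_value
+    from main_kpis
+    where category = 'billing_country'
+    group by 1, 2
+)
+select g.metric_date, g.metric_name, b.summed_value, g.metric_value as global_value
+from main_kpis g
+join by_country b
+  on b.metric_date = g.metric_date
+ and b.metric_name = g.metric_name
+where g.category = 'global'
+  and b.summed_value > g.metric_value;
+```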
+
+Beyond extending our capacity to detect flaws in our KPI developments, we are in a much better spot to automatically detect and get alerted about any massive underlying data issues - such as the increase in Checkout/Cancelled Bookings we had a few days ago.
+
+## E-Deposit Report Update
+
+Our team has been hard at work updating the E-Deposit report as part of our ongoing efforts to improve data handling. We’ve successfully migrated data from Cosmos DB into our Data Warehouse (DWH), enabling smoother integration with our reporting systems. This significant step forward will allow us to better manage, manipulate, and connect the data with existing tables for enhanced analysis and reporting.
+
+While the migration has presented several challenges, including issues with data integrity, we are actively collaborating with the development team to resolve these as quickly as possible. Our goal is to ensure that the final report delivers the most accurate and reliable insights, reinforcing our commitment to high-quality data.
+
+We expect the updated report to be available early next week for everyone who needs access. Stay tuned for further updates!
+
+
+
+## Quarterly planning: Our proposal is on the table
+
+Following up from [our last update](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’ve already finished up our proposal for which scopes and priorities we want the Data team to deliver for the company in Q4’24.
+
+The full-blown proposal is quite detailed and still subject to change, since we will be discussing it next week in a meeting with the TMT. Nevertheless, you can check below a screenshot of its executive summary so you can get a feel for which way we are planning to go.
+
+
+
+And again: if we have somehow managed to not be aware of something critical you need from the Data team, this is a last minute call to bring it up so we can properly plan for it. [Get in touch with us!](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21)
+
+# 2024-08-30
+
+## Automated Data Tests are now live
+
+When you are sitting on top of a lot of data, there are *tons* of things that can go wrong.
+
+As we keep on ingesting data from different systems into our DWH, and then transform it, mix it and prepare it there so it’s useful for everyone, many Data Quality issues can pop up. Missing records, duplicated fields, IDs that should be unique but are not, gaps in time series, negative prices… you name it.
+
+The quality of our data is a sensitive topic because many of you across the company are relying on it to make decisions and do your job. If the data that we show is telling lies, we will be building on top of air. Definitely not a good plan.
+
+
+
+Data Team checking whether the data in the DWH tastes good.
+
+To improve in this area, we have developed and deployed a new tool to test our data regularly. What does testing mean here? Well, it means that, throughout the DWH, we are setting expectations for our tables and columns and then checking if they hold true. These expectations are things like:
+
+- The ID of bookings should be unique in the booking table.
+- Records in the guest payments table should never be more than a day old, since we are loading data daily and there are payments every day.
+- Our exchange rate timeseries should have values for all days between today and January 1st 2020, without any individual days missing data.
+- The number of bookings created yesterday should be within 2 standard deviations of the daily average for the past month.
+
+We then have a little piece of software that checks all of these every morning and sends a warning to the Data team if any of our expectations is not met so we can remediate.
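+
+To give a flavour, the last expectation in the list above can be checked with a query along these lines (the `daily_bookings` table name is illustrative):
+
+```sql
+-- Flag yesterday's count if it deviates more than 2 standard deviations from the
+-- daily average of the past month. Zero rows returned = expectation met.
+with history as (
+    select avg(bookings_created) as mean, stddev(bookings_created) as sd
+    from daily_bookings
+    where created_date >= current_date - interval '30 days'
+      and created_date <  current_date
+),
+yesterday as (
+    select bookings_created
+    from daily_bookings
+    where created_date = current_date - 1
+)
+select y.bookings_created, h.mean, h.sd
+from yesterday y
+cross join history h
+where abs(y.bookings_created - h.mean) > 2 * h.sd;
+```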
+
+Our test suite is still small, but we expect to grow it over time to catch more and more issues as soon as they happen. With this, we will achieve more and more quality and speed over time.
+
+## Preparing for quarterly planning
+
+As part of [our ways of working](https://www.notion.so/Data-Team-Organisation-81ea09a1778c4ca2ab39e7f221730cb5?pvs=21), we will soon hold our quarterly planning session with the TMT. In this session, we will review the progress made during the previous quarter and discuss the top priorities and goals we should set for the upcoming one.
+
+We are currently in drafting and discussion phase, with already [quite a few scopes and ideas on the table](https://www.notion.so/482df2f675d24173adbeb0c619e301c4?pvs=21) that we need to refine so we can have more concise discussions and agreements with the TMT.
+
+If you think there is something critical you need from the Data team that is not currently in our radar, this is the perfect moment to bring it up so we can properly plan it. [Get in touch with us!](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21)
+
+## Analyst Postgres setup optimization
+
+Everyone in the Data Team has permission to build tables in our DWH. This is the way we describe the logical steps that we follow to process data and get it pristine for everyone to read.
+
+Because it’s important to keep the DWH clean and working at all times since we are all relying on it, we don’t build these tables directly on it (on what we call the *production* environment). Instead, each member of the Data Team has a tool to replicate a working copy of the real DWH in their laptops. This way they can create, destroy, mess up and do anything they want there without breaking things in the production DWH.
+
+This is all nice and convenient… but laptops are not exactly Ferraris when it comes to computers. A few weeks ago, our team started having trouble running things locally. Running the DWH computations took an increasing amount of CPU and RAM, and we literally started to be unable to run commands. This was affecting our productivity significantly, because developing new stuff for the DWH was becoming a pain in the ass.
+
+
+
+Uri’s laptop while trying to run the KPIs for all of Superhog in one go.
+
+To mitigate the situation, Pablo put his engineer hat on this week and designed some optimization settings to improve the performance of our local DWHs. And it worked like a charm! We are now back in the green, with analysts happily working without failed executions and our laptops chewing through the workloads like champs.
+
+## Migration of Cosmos DB reports to DWH
+
+We're excited to announce that we're currently migrating all reports connected to Cosmos DB to our Data Warehouse (DWH). This transition is a significant step forward, allowing us to centralize our data and integrate it more seamlessly with other datasets. By moving to the DWH, we can now perform complex data manipulations more efficiently and enhance our reporting capabilities.
+
+This migration not only simplifies data connections but also improves the scalability and flexibility of our reports. With the data now in the DWH, we can deliver more accurate, comprehensive insights and better connect our reports to other crucial business metrics.
+
+We're confident that this move will greatly enhance our analytical capabilities and help us continue to provide high-quality, data-driven insights across the organization. Stay tuned for further updates as we complete the migration process.
+
+## New Dashboard reporting is now available
+
+This week we have finalised the ingestion and modelisation of the [minimum data for New Dash monitoring](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) to our DWH. With this data available, we have created and published a new report in PBI that currently tracks some key indicators at global and New Dash user level.
+
+We have also incorporated an adoption funnel for users in the New Dash MVP to visually understand the main drivers or blocking points in the adoption.
+
+
+
+This [new report](https://app.powerbi.com/groups/me/apps/7197c833-dbf9-4d2c-bca1-95f74aec4b11/reports/f0bad5b7-d9d2-45ba-a3cb-d190dd91b493/1bbfbee419e040409b95?experience=power-bi) is expected to evolve in the coming weeks and months with new information and visuals as the New Dash initiative advances into following stages.
+
+## KPIs progress, working on Billing Country segmentation
+
+This week we did some under-cover, hidden work on a new segmentation for the KPIs. In this case, we’re currently developing the possibility to have the KPIs per Country, and specifically per Host Billing Country.
+
+We are currently reviewing the changes and checking whether we can improve the performance of the runs of these new models, since each dimension we include makes our runtime increase. However, these optimisations are also part of our job to keep things steady - and actually a very fun challenge!
+
+We will keep you posted once this segmentation is available in the [Main KPIs](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?experience=power-bi) report!
+
+# 2024-08-23
+
+## KPIs categories: segmentation by number of listings is now live
+
+Finally! We’ve been working on allowing KPI splits per category - well, the technical word is dimension - for quite a bit of time already. It has proven to be a challenging task because it has impacted literally all the existing data modelisation used for the KPIs, and it forced us to optimise the existing code to allow for better scalability.
+
+The result is that we will now be able to easily integrate new categories for the KPI splits. So far, we’ve added one new category: a client segmentation, at deal level, based on how many listings have been booked in the past 12 months.
+
+We have added 5 segments, inspired by what is available in Hubspot - even though the logic used for the computation is slightly different:
+
+- 61+: Customers that have 61 or more listings booked in the last 12 months
+- 21-60: Customers that have between 21 and 60 listings booked in the last 12 months
+- 06-20: Customers that have between 6 and 20 listings booked in the last 12 months
+- 01-05: Customers that have between 1 and 5 listings booked in the last 12 months
+- 0: Customers that have 0 listings booked in the last 12 months. This is a special case that generally represents a few Deals that are churning, but we still want to report it since it’s part of the reality.
+
+> A small note: not all users have a deal, especially for historic values. So keep in mind that the aggregates of most KPIs by this category, even when selecting all possible segments, will be strictly lower than reality. If you just want to keep an eye on the total values, it’s best to use the Global category, which represents the previous state of the KPIs.
+>
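+
+Under the hood, the bucketing is conceptually a simple CASE expression over the trailing 12-month count of listings booked per deal. A rough sketch with hypothetical table names (not the exact production logic):
+
+```sql
+-- Illustrative segmentation of deals by listings booked in the last 12 months.
+select
+    d.deal_id,
+    count(distinct b.listing_id) as listings_booked_12m,
+    case
+        when count(distinct b.listing_id) >= 61 then '61+'
+        when count(distinct b.listing_id) >= 21 then '21-60'
+        when count(distinct b.listing_id) >= 6  then '06-20'
+        when count(distinct b.listing_id) >= 1  then '01-05'
+        else '0'
+    end as listings_segment
+from deals d
+left join bookings b
+    on b.deal_id = d.deal_id
+   and b.created_at >= current_date - interval '12 months'
+group by d.deal_id;
+```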
+
+This is what the “active” customer base in 2024 (until June) looks like:
+
+
+
+Data extracted from Main KPIs reporting, within the new tab Detail by Category.
+
+As you can see, around ~68% of the customer base is composed of small clients that have between 1 and 5 listings booked in the last 12 months. On the other hand, the smallest group is the clients that have 61 or more listings booked in the last 12 months, accounting for ~5.8%.
+
+Ok but… you might be wondering why this is important, anyway? Let’s show some examples!
+
+
+
+
+
+In this case we’re showing the distribution of 2 of the main sources of Revenue:
+
+- **Guest Revenue**, mainly an aggregation of Deposit Fees, Check-in Hero Fees and Waiver Net Fees (deducting the amount paid back to hosts)
+- **Invoiced Operator Revenue**, mainly an aggregation of Booking Fees, Listing Fees and Verification Fees
+
+> Note that APIs revenue is not displayed in this segmentation because API’s deals do not have listings associated.
+>
+
+The insights are quite straightforward:
+
+- **the big accounts** (61+, orange) **that represent ~5.8% of the customer base bring more than 50% of the Guest and Invoiced Operator Revenue**.
+- On the other hand, **the small clients** (1-5, pink) **that represent ~68% of the customer base bring around ~18% of the Guest Revenue and ~13% of the Invoiced Operator Revenue.**
+
+Interested in knowing more? You can use the new segmentation and the new tab Detail by Category to deep-dive into any existing metric!
+
+In the following days we will focus on fixing the monetary amount discrepancies - the famous tax inclusive/exclusive subject - while at the same time continuing to integrate a new KPI segmentation: by Country. Stay tuned for more KPIs!
+
+## New Dash MVP monitoring: starting to ingest the data into DWH
+
+This week we’ve also started to ingest the new tables related to the MVP, and more generally, to the New Dash into the DWH. There’s still a bit of work to do since the schema is new and we want to make sure we model it in a proper way to have consistency with the already existing data modeling for the current setup. Also, we’re trying to be smart and anticipate the following user migrations that will come from old dash to new dash in the following stages, so we can minimize future work.
+
+In the meantime, we’re still providing the ad-hoc extraction 3 times a week in order to [track New Dash MVP](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) performance.
+
+## New features in Check-in Hero report
+
+We're excited to announce the inclusion of a new feature in the Check-in Hero report—Address Validation. This enhancement allows us to track which Check-in Cover purchases have been rejected or failed, providing valuable insights into potential issues with specific hosts or accommodations.
+
+With the Address Validation feature, we can now identify patterns or trends in rejections, helping us understand the root causes behind these failures. This could reveal whether certain hosts or accommodations are frequently encountering issues that might be addressable through targeted support or adjustments.
+
+By leveraging this new capability, our team can work more effectively to ensure a smoother experience for our users, ultimately reducing the number of rejected Check-in Cover purchases and enhancing overall customer satisfaction.
+
+## Data Team Expands Capabilities, Unlocking Deeper Insights
+
+The Data Team has been hard at work, responding to an increasing number of data requests from other departments. This collaborative effort has not only helped meet the growing needs of our colleagues but has also uncovered new layers of valuable data that are transforming our understanding of key processes.
+
+One of the major breakthroughs is our ability to access more detailed information about guest journeys. This enhanced data visibility allows us to analyse which purchase options are available to our customers and assess their impact on user behaviour. Additionally, we've gained the ability to track whether guests are seeing logos from hosts during their verification process and compare how these visual cues may influence the speed and completion rates of their verifications.
+
+It is important to note that our current data certainty is limited to the present. However, the development team is actively working on providing access to historical data, which will allow us to analyse trends over time and further enhance our understanding.
+
+## Xero Invoicing and Crediting report is now fully live
+
+A long while ago, we started developing an Invoicing and Crediting report in PBI. This report is driven by the details of invoices and credit notes in our accounting system, and can satisfy a lot of needs around sales:
+
+- Finding out how much we’ve invoiced a customer…
+- which of their payments are pending…
+- or how much we’ve paid out in Damage Waivers back to hosts in a given month
+- The report allows visualizing both individual deals and global amounts, as well as high-level totals or transaction-level detail
+
+The report was ready for some time but was awaiting data quality checks from the Finance and Data teams. We’ve finally managed to conclude them and the report is now ready to use!
+
+If you feel you could use some data from this report, or think you should have access to it, please get in touch with Jamie Deeson in Finance.
+
+## Successful DWH integration of our first Cosmos DB container!
+
+Great news! After months of research, spinning around and hard work, we’ve finally integrated our first Cosmos DB container into the DWH!
+
+
+
+Pablo when he saw the first records flowing to the DWH.
+
+[Our tests](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) with `anaxi` have been successful and we have connected the e-deposit verifications database to the DWH. This allows us to access the e-deposit data within the DWH, and leverage it alongside all the other data from our systems.
+
+We also took the chance to meet with Ray and Manu, who are the leads of the tech squads using Cosmos DB, to align and discuss how we will work together from now on. You can read more about this in our documentation:
+
+- General docs on the integration: [Cosmos DB Integration](https://www.notion.so/Cosmos-DB-Integration-2f780b754cd948f38051dfb30d3a5beb?pvs=21)
+- Teams alignment: [Dependency management](https://www.notion.so/Dependency-management-7341e3d98f69424090bd9a2f3b227472?pvs=21)
+- Inventory of live integrations: [https://www.notion.so/knowyourguest-superhog/Integration-Inventory-6fc3900234f44f67a2ceb7274589d700?pvs=4](https://www.notion.so/Integration-Inventory-6fc3900234f44f67a2ceb7274589d700?pvs=21)
+
+Even though the effects have yet to be felt, this milestone unlocks a world of joy! We can now develop all sorts of reporting, automation and insights around e-deposit data. We will soon be migrating our old reports that read directly from Cosmos DB into the DWH, and also assist with building new stuff like further automation for invoicing e-deposit customers.
+
+# 2024-08-16
+
+## `anaxi` is ready for testing
+
+This week we managed to release the first version of `anaxi`, the tool we have developed to sync data between our Cosmos databases and our DWH. This is a great step towards leveraging this data for all sorts of purposes: developing new business KPIs around API services, automating invoicing of services like e-deposit, or setting up monitoring reports to track the performance of our services.
+
+We will soon begin tests with some of our databases to polish the rough edges of the tool, and we will also align with our colleagues in the tech team, since keeping this integration up and running in a smooth way will be a team effort.
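+
+For the technically curious, here’s a rough sketch of the kind of incremental sync a tool like `anaxi` can perform, using the official `azure-cosmos` Python SDK. The container name, the `load_into_dwh` helper and the watermark handling are illustrative assumptions, not the actual implementation.
+
+```python
+from azure.cosmos import CosmosClient
+
+def sync_container(account_url: str, key: str, db_name: str,
+                   container_name: str, last_sync_ts: int) -> int:
+    """Pull documents modified since the last sync and hand them to the DWH loader.
+
+    Cosmos DB stamps every document with a `_ts` (epoch seconds) system
+    property, which can serve as an incremental watermark.
+    """
+    client = CosmosClient(account_url, credential=key)
+    container = client.get_database_client(db_name).get_container_client(container_name)
+
+    items = container.query_items(
+        query="SELECT * FROM c WHERE c._ts > @since",
+        parameters=[{"name": "@since", "value": last_sync_ts}],
+        enable_cross_partition_query=True,
+    )
+
+    batch = list(items)
+    load_into_dwh(batch)  # hypothetical helper that bulk-inserts rows into the DWH
+    return max((doc["_ts"] for doc in batch), default=last_sync_ts)
+
+def load_into_dwh(rows):
+    """Placeholder: the real tool would write these documents into warehouse tables."""
+    print(f"Would load {len(rows)} documents into the DWH")
+```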
+
+## Guest taxes logic documented, ready to implement in DWH
+
+This week we worked together with the Finance team to build some documentation around how taxes should be computed in the area of Guest services, such as damage waivers or CheckIn Hero covers. You can read it here: [Guest Services Taxes - How to calculate](https://www.notion.so/Guest-Services-Taxes-How-to-calculate-a5ab4c049d61427fafab669dbbffb3a2?pvs=21)
+
+This was relevant because [a few weeks ago](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) we realized that revenue metrics were being reported differently across some data and finance reports. We spotted taxes being the issue, and aligned with the Finance team that we would reproduce tax computations in the DWH to be able to report both tax-inclusive and tax-exclusive amounts in our reports. Documenting the tax logic was a first step to start work on this.
+
+Next, we will allocate some capacity to apply this logic in our data processing in the DWH. Once that happens, we will be able to show the right amounts in our reports.
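+
+As a simple illustration of the kind of logic this implies (assuming guest payments are stored tax inclusive and the applicable tax rate is known; the 20% below is just an example, not the actual rate used):
+
+```python
+def tax_exclusive(gross_amount: float, tax_rate: float) -> float:
+    """Convert a tax-inclusive amount into its tax-exclusive equivalent."""
+    return gross_amount / (1 + tax_rate)
+
+# Example: a £30.00 damage waiver charged to a guest, assuming 20% VAT applies
+net = tax_exclusive(30.00, 0.20)   # -> 25.00 net, 5.00 of tax
+print(round(net, 2))
+```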
+
+## Cosmos DB connection problem on Screening API fixed
+
+We successfully resolved an issue we were facing with extracting data from Cosmos DB and establishing a connection between Cosmos and the Power BI app. Working closely with the development team, we overcame the problem and plan to maintain closer communication to prevent similar issues in the future. The Screening API Report in Power BI is now up and running, currently featuring mock data, and is ready for the new tool to start functioning.
+
+
+
+## Check-In Hero address validation
+
+With the rapid growth of Check-in Hero, we are continuously monitoring our services to ensure everything functions correctly and to identify new opportunities to enhance the guest experience. We now have the capability to detect any instances where attempts to add this cover have been rejected by the system. By analysing these patterns, we aim to resolve these issues and offer this additional security to as many guests as possible.
+
+We are currently integrating this data into our Check-in Hero reports in Power BI, making it easily accessible to all interested in it.
+
+## Payment options in Guest Journey
+
+In recent weeks, we've received requests for information about the available payment options provided to our guests during their verification requests. Unfortunately, we've found this process to be complex, with unreliable data for historical records. At present, we cannot reliably determine which services and at what price they were offered to guests in the past; we can only extract data on what is currently being offered.
+
+The development team is actively working on fixing this bug, but we don't yet have an estimated date for resolution. Even after the fix, it may not be possible to recover historical data.
+
+While we hope this issue is resolved as soon as possible, we can at least now extract data on the services presently being offered to customers in their Guest Journeys.
+
+You can find more details in our notion documentation:
+
+[Payment Validation Set data problems](https://www.notion.so/Payment-Validation-Set-data-problems-2382b2ecb24243449caac4687f044391?pvs=21)
+
+# 2024-08-09
+
+## Currency rates historical loading is finished
+
+Finally!
+
+[After two long months](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) and many thousands of API calls to xe.com, we finally have stored in our DWH the exchange rate history for all the currency pairs we work with since January 1st 2020 to today.
+
+This took a bit longer than we would have liked because we have a monthly limit on how many times we can request rates from xe.com, so we had to spread the load across time to avoid hitting our monthly caps.
+
+From now on, we will simply keep on getting new rates on a daily basis.
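+
+For a rough idea of why this took two months, here’s a minimal sketch of a quota-aware backfill loop: walk backwards through the missing days and stop once the month’s API budget is spent. The `fetch_daily_rates` helper and the budget constant are stand-ins for the real integration, not the actual `xexe` code.
+
+```python
+from datetime import date, timedelta
+
+MONTHLY_CALL_BUDGET = 10_000  # illustrative; the real cap depends on the xe.com plan
+
+def fetch_daily_rates(day: date) -> dict:
+    """Placeholder for the real xe.com request made by the loader."""
+    return {"date": day.isoformat(), "rates": {}}
+
+def backfill(start: date, end: date, calls_used_this_month: int) -> list:
+    """Load historical rates day by day until the monthly API budget runs out."""
+    loaded = []
+    day = end
+    while day >= start and calls_used_this_month < MONTHLY_CALL_BUDGET:
+        loaded.append(fetch_daily_rates(day))
+        calls_used_this_month += 1
+        day -= timedelta(days=1)
+    return loaded  # the remaining days are picked up the following month
+
+# Example: try to backfill everything since 2020-01-01, with 9,700 calls already spent
+rates = backfill(date(2020, 1, 1), date.today(), calls_used_this_month=9_700)
+```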
+
+## Cosmos DB integration research shows first results, starting development work
+
+This week we continued [our work around finding a good system design to bring data from our Cosmos DB databases into our DWH](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+After running some tests on Azure and diving deep into Cosmos DB capabilities and connectors, we finally have a draft design that we are ready to implement and test. The plan is to develop in-house a small Python tool to perform the syncs between any of our Cosmos DB containers and the DWH. Our assessment indicates that this solution would do a great job of covering our needs.
+
+We are planning to run a Proof-of-Concept: we will implement a first version of the tool ASAP and then try to hook one of our Cosmos DB containers into our DWH in production. If everything goes well, we will extend the approach to the rest of our containers.
+
+As a reminder you can keep track of this project through this page: [CosmosDB <> DWH Integration Project](https://www.notion.so/CosmosDB-DWH-Integration-Project-e87c41a93d9f484c842261eb55517470?pvs=21) . If you are technically inclined, you can also checkout the tool repository here: [https://guardhog.visualstudio.com/Data/_git/data-anaxi](https://guardhog.visualstudio.com/Data/_git/data-anaxi)
+
+## KPIs progress
+
+As usual, this week we have been working on KPIs. Besides the [exploration of MetricFlow](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’ve also been working on several open fronts:
+
+- Firstly, we’re making progress on providing **KPIs split by different dimensions**. First things first, we needed to tackle some performance improvements to accommodate these new dimensions. At this stage we’ve validated the approach we’re going to take and we have started building KPIs by customer type, segmented based on the number of active listings. It will still take a while until we can have this split available in the report, but the initiative is advancing.
+- Additionally, we’ve also taken some time to ensure there’s a minimum of **technical documentation** on the KPIs as they stand today, containing meaningful information on the data workflow, how to add new metrics, and even the Power BI report itself. This documentation is available here:
+
+[(Legacy) Technical Documentation - 2024-08-05](https://www.notion.so/Legacy-Technical-Documentation-2024-08-05-aa7e1cf16b6e410b86ee0787a195be48?pvs=21)
+
+- We’ve also deployed a visual change in the conditional formatting for metrics where 'more' indicates 'worse,' which we previously didn't account for. Additionally, we’ve formatted the values to make the amounts and units easier to understand. For example, with Churn, an increase in volume is now reported in red, and metrics like payments now display the currency symbol along with thousands separators for better readability of large numbers.
+
+
+
+- Lastly, we also had a discussion with the Finance team aiming to understand and solve the **discrepancies in the figures reported between Data and Finance**, mostly linked to the [tax inclusion/exclusion in Guest payments](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md). With some new insights on the table, we can now revisit the subject with the goal of fixing it in the following days.
+
+## New Dash barebones tracking
+
+This week we also started monitoring a very minimal set of indicators of how the New Dash MVP is performing since its launch on July 30th.
+
+At this stage this tracking is quite ad-hoc, directly plugged into the Superhog backend in order to rush it and have some early data available.
+
+Here are the main indicators we’re tracking so far, in a screenshot from Friday 9th of August:
+
+
+
+In the coming days we will start integrating the new tables linked to the New Dash MVP so we can work on creating a Power BI report.
+
+## MetricFlow research finishing
+
+Last week we explained that [we started investigating a new package called MetricFlow](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) that could help on the scalability of the KPIs, especially on adding new dimensions such as customer segments, countries, etc.
+
+This week we assessed the 2 remaining points to take a decision on whether we will move forward or not with this package, namely 1) materialising the data and 2) configuring multiple metrics with segmentations that depend on different entities.
+
+On the second point, it seems it is indeed possible, and it’s quite impressive how easy it is to do so *(well, once you have set up everything properly, which is not straightforward)*. However, we didn’t find a way to materialise the MetricFlow queries into tables, so this looks like a blocking point.
+
+In summary, **we decided NOT to use MetricFlow because it’s at too early a stage and would not fit our immediate needs**. However, once this product gains maturity, we don’t rule out that it could become a nice tool to configure business KPIs.
+
+If you are interested in knowing more about MetricFlow, you’ll find additional details of the research in this dedicated Notion page: [Exploration - MetricFlow - 2024-08-06](https://www.notion.so/Exploration-MetricFlow-2024-08-06-f45d91500ad7433d9ff4e094b8a5f40b?pvs=21)
+
+# 2024-08-02
+
+## Minimum Listing Fee is now being applied in Invoicing
+
+This month, we’ve extended our invoicing tools to apply a contractual term that many of our customers had agreed to but that we had never applied so far: the Minimum Monthly Listing Fee. The agreement specifies that Superhog will charge a minimum amount each month as Listing Fees, even if the set per-listing fee times the number of active accommodations falls below that.
+
+We’ve modified our code to apply this logic and the Finance team is already working this month with data that takes this contractual term into account.
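+
+In practice, the rule boils down to invoicing the larger of the two amounts. A minimal sketch with made-up numbers:
+
+```python
+def monthly_listing_fee(per_listing_fee: float, active_listings: int,
+                        minimum_monthly_fee: float) -> float:
+    """Charge the per-listing fee, but never less than the contractual monthly minimum."""
+    return max(per_listing_fee * active_listings, minimum_monthly_fee)
+
+# Example with made-up numbers: £1.50 per listing, 40 active listings, £100 minimum
+print(monthly_listing_fee(1.50, 40, 100.00))   # 60.00 < 100.00, so we invoice 100.00
+```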
+
+## Cosmos DB integration research continues
+
+After having to park this topic for some time due to other priorities, this week we’ve resumed our investigation into how we can sync the data in our multiple Cosmos DB containers to our DWH. As a reminder, you can read our documentation on this topic in this page: [CosmosDB <> DWH Integration Project](https://www.notion.so/CosmosDB-DWH-Integration-Project-e87c41a93d9f484c842261eb55517470?pvs=21)
+
+We’ve found a promising architectural pattern that could satisfy our needs. We will soon design a plan to test it out and confirm its validity, a step after which we will craft a final design and discuss it with the engineering team to have everyone aligned.
+
+## Revenue details now available in the Business KPIs
+
+This week we have released new revenue metrics that are more granular than what was already available. In essence, we’re now able to track, for both the Global and Deal-based views of the KPIs, the following metrics:
+
+- For Invoiced Operator Revenue:
+ - 🆕 Invoiced Booking Fees
+ - 🆕 Invoiced Listing Fees
+ - 🆕 Invoiced Verification Fees
+- For Invoiced APIs Revenue:
+ - 🆕 Invoiced Guesty Fees
+ - 🆕 Invoiced E-Deposit Fees
+- For Guest Revenue:
+ - 🆕 Waiver Net Fees
+ - 🆕 Waiver Amount Paid by Guests
+ - 🆕 Waiver Amount Paid back to Hosts
+ - 🆕 Deposit Fees
+ - 🆕 Check-In Hero Amount Paid by Guests
+
+Keep in mind that the Invoiced figures, as well as the Waiver Amount Paid back to Hosts, have the invoicing delay, and thus are not available for the current nor the previous month. Now that we’re in August, the data for June is fully available.
+
+**Last but not least:** we still have the discrepancy on revenue figures from Xero (Invoiced) and Backend (Guest related payments), so keep in mind that the revenue figures are not displaying correct data. You can learn more on this subject in the [previous entry of the Data News](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+## MetricFlow exploration
+
+This week we started to explore the possibility of creating a semantic layer within the DWH by using the MetricFlow package.
+
+In essence, a semantic layer is a data aggregation layer ready for reporting metrics at different granularities, which could be very useful for the scalability of the Business KPIs initiative - but also for enhanced reporting and other initiatives in the future.
+
+We started exploring MetricFlow now that we’re interested in creating Customer Segmentation and Geography slices, which theoretically could benefit from this approach. At this stage we managed to solve version dependencies and configure a very simple Bookings model with different aggregations - and we managed to retrieve the information by querying within MetricFlow! Here’s an example:
+
+
+
+In this example we’re using the metric total bookings. We can retrieve it without specifying any other option, in which case it returns the total amount of bookings we have available. We can also group it by booking state, and then it slices total bookings by the booking state. There are much more advanced features (so far this is basically a plain SQL query anyway…) but those are still under investigation 😄
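+
+Conceptually, a metric grouped by a dimension is just an aggregation that the semantic layer generates for you. Here’s a rough pandas sketch of what the query above boils down to; the column names and data are illustrative, not the actual model configuration.
+
+```python
+import pandas as pd
+
+# Illustrative bookings data; MetricFlow would read this from the configured semantic model
+bookings = pd.DataFrame({
+    "booking_id": [1, 2, 3, 4],
+    "booking_state": ["confirmed", "cancelled", "confirmed", "confirmed"],
+})
+
+# metric: total_bookings, no grouping -> a single number
+total_bookings = bookings["booking_id"].count()
+
+# the same metric grouped by the booking_state dimension
+by_state = bookings.groupby("booking_state")["booking_id"].count()
+print(total_bookings, by_state, sep="\n")
+```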
+
+There are 2 key aspects that we still need to assess:
+
+- Is it possible to materialise these MetricFlow queries into tables that could be used afterwards for our Power BI reporting?
+- Are we able to configure multiple models with segmentations that depend on different entities, and more importantly, does it return accurate data?
+
+Likely we’ll continue the investigation by next week.
+
+## Updating Business Overview reports
+
+We are currently updating several Power BI reports in our Business Overview app, leveraging more accurate data to ensure the information we present is as reliable as possible. For the Host Fees report, we can now use real exchange rate data for the various currencies used by our users. Previously, without access to the currency details for each transaction, we made a crude aggregation of all booking fees. With the availability of each currency and their daily exchange rates, we can now provide a more precise total of the booking fees in GBP, so expect to see this number go down when updated.
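+
+As a rough illustration of the change: instead of one crude aggregation, each fee is now matched with the exchange rate of its own date and currency before being summed into GBP. The column names and figures below are illustrative only.
+
+```python
+import pandas as pd
+
+fees = pd.DataFrame({
+    "fee_date": ["2024-07-01", "2024-07-01", "2024-07-02"],
+    "currency": ["EUR", "USD", "EUR"],
+    "amount":   [100.0, 80.0, 50.0],
+})
+rates = pd.DataFrame({
+    "rate_date": ["2024-07-01", "2024-07-01", "2024-07-02"],
+    "currency":  ["EUR", "USD", "EUR"],
+    "to_gbp":    [0.85, 0.78, 0.84],   # illustrative daily rates
+})
+
+# join each fee with the rate for its own date and currency, then convert and sum
+converted = fees.merge(rates, left_on=["fee_date", "currency"],
+                       right_on=["rate_date", "currency"])
+converted["amount_gbp"] = converted["amount"] * converted["to_gbp"]
+print(converted["amount_gbp"].sum())
+```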
+
+Parallel to Host Fees, we are also updating the E-deposit and Guest Payments reports in collaboration with the Finance team to better align with the numbers reported by each team. As mentioned last week, we are investigating the discrepancies between our teams, which are very likely tax-related.
+
+
+
+We are aiming to solve these discrepancies as soon as possible so we can ensure consistency and accuracy in our reporting.
+
+# 2024-07-29
+
+## Revenue figures discrepancy investigation
+
+This past week we aimed to investigate in greater detail the differences in the revenue figures between what we report as the Data team in the Business Overview vs. Finance reporting.
+
+Seeing that guest-related revenues coming from guest payments were overestimated on the Data side with respect to the Finance side, we reached the hypothesis that guest payments were probably tax inclusive in the backend, while we’re certain that the rest of the revenues invoiced to hosts are tax exclusive. After a quick discussion with Ben R, it seems this hypothesis is correct.
+
+As of today, this issue still persists and we need to tackle it in the following days with the help of Finance. In the meantime, keep in mind that some reporting areas are showing incorrect figures, specifically:
+
+- **Business Overview - Guest Payments report**
+    - Waivers: total waiver amount charged (tax incl.) vs. amount paid back to hosts (tax excl.). The % Paid to Host is also impacted
+- **Business Overview - Main KPIs**
+ - Total Revenue and derived metrics combine both data from SH (Guest Revenue, coming from guest payments) and Xero (Invoiced Operator Revenue, Invoiced APIs Revenue). Also, keep in mind that by nature the different Revenue sources can be inconsistent between them.
+
+For a more detailed investigation, we invite you to take a look at this Notion page:
+
+[Data quality assessment: DWH vs. Finance revenue figures](https://www.notion.so/Data-quality-assessment-DWH-vs-Finance-revenue-figures-6e3d6b75cdd4463687de899da8aab6fb?pvs=21)
+
+
+
+## Business KPIs alignment - 3rd session
+
+In the [previous edition of the Data News](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) we explained the new deliveries that have been made available within the KPIs initiative for the second batch.
+
+This past week we had a sync session with the TMT to discuss further deliverables for the coming weeks. At this stage, the number of available metrics has grown quite a bit (a total of 37), so it should be quite good for a first rough understanding of the business situation. Therefore, for this next delivery we’re not aiming to include tons of new metrics, except for the more fine-grained revenue metrics that should help us with the revenue discrepancy investigation. Rather, we’re going to start providing KPI categorisations or segmentations.
+
+There are tons of ways we can categorise the data: geography, currencies, sources of bookings/verifications, types of hosts, etc. To start with, we’re going for 2 main groups:
+
+- **KPIs by Geography**: most likely, we will do this by country. Actually, when we say country, it’s not fully clear which country we are referring to… because a host can be based in the UK but have a listing in the US that is being booked by a guest from Spain. Ignoring the guest for now, we’ve noticed we don’t really know to what extent this host-based-in-X-but-with-listings-in-Y situation is actually happening, and since we’re data-driven, we conducted a [small analysis](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) 😊
+- **KPIs by Customer segmentation**: likely, this is going to be a segmentation based on the number of listings that a client has active with us in Superhog. There are already some segmentations created in this area, and we’re currently trying to understand how they are built and the motivation behind them, to see if it makes sense to replicate them or aim to improve on them. More on this, we hope, in the next Data News.
+
+Besides this, we will deliver a few more minor details to help the visualisation and usage of the reporting, starting with a few modifications in the graphic display of metrics that was deployed last Friday:
+
+
+
+Now we have the possibility to display a single metric split by Year, or if clicking the “Metrics” button…
+
+
+
+… we will return to the full timeline, in which we can select one or multiple metrics. Clicking in the “Years” button will move back to the previous screenshot. Impressive dashboard user experience thanks to our expert Joaquín!
+
+All the information of the KPIs session as usual is available here:
+
+[Business KPIs Definition (III) - TMT session 24th July 2024 ](https://www.notion.so/Business-KPIs-Definition-III-TMT-session-24th-July-2024-1bd5435844ac432f9161b1ccf4c4d062?pvs=21)
+
+## Host Billing Country vs. Listing Country
+
+Have you ever wondered what percentage of all active listings are located in the US but belong to hosts based in the UK?
+
+Well, the answer is 2.36% of all active listings.
+
+
+
+Funny enough, when I first thought about this example I expected the US and the UK to lead the ranking somehow, and I was not that wrong. That’s why I initially believed the most representative cross-country example was going to be hosts in the UK with listings in the US. But actually it’s not! It’s hosts in the UK with listings in Ireland, representing 2.61% of all active listings 🇮🇪. A nice self-reminder of why we need to trust data, and not intuition 😇
+
+As [mentioned earlier](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we have conducted a quick analysis to understand these situations in which a host might be based in a given country but have listings in other countries. We call the first the Billing Country and the second the Listing Country. Here are the main insights of this analysis:
+
+- 13.2% of the active listings are located in a different country that does not correspond to the host billing country.
+- 11.5% of all hosts with active listings have at least one listing located in a different country.
+- Finally, ~50% of the total active listings come from the 6.5% of hosts that operate in more than one country. These hosts have ~26% of their listings abroad.
+
+There are a few more insights and potential to deep-dive into, so if you’re interested, we’re sharing the file here.
+
+- **Analysis here!**
+
+ [Analysis_billing_vs_listing_country.xlsx](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx)
+
+
+Let us know what you think in the Slack thread!
+
+## Screening API report ready for deployment
+
+We have a new report ready to go for the new Screening API which will start working in August.
+This report will show information on verifications through the new Screening API, the types of verifications requested and whether there was any problem with each of these. Right now we only have some mock data to work with in the report, but as soon as we have real data we will deploy this new report and give access to everybody that is interested in the data.
+
+
+
+I wonder who this *fakeuser@email.com*, whose name is *Not Clay*, is…?
+
+
+
+## New addition to the Check-in Hero reports
+
+This week we added some new data to one of the reports inside the Check-in Hero app. Inside the Host Data report there is a new tab with hosts’ listings that have Check-in Cover active. Here you will find information for each of these listings, like country or town, and how many covers have been purchased for each of them. It contains very similar data to the previous Host Details tab, but at a more granular level.
+
+
+
+Stay tuned to hear more from us, as we are still working on more updates and new reports to help facilitate access to the data that we know will help our business keep growing.
+
+# 2024-07-22
+
+## Currency rates incident
+
+This past week we faced an incident with the currency rates retrieved from Xe.com. Specifically, on Thursday 18th our automated daily process failed and we needed to deep-dive into it to understand what was going on. Fortunately, with the help of the tech team we managed to identify and fix the issue the same day, so the incident was contained within a relatively constrained timeline.
+
+You can check all the details of the incident [here](https://www.notion.so/20240718-01-Xe-com-data-not-retrieved-5c283e9aa4834323b38af0bff95477a5?pvs=21).
+
+## Improvements on Guest Satisfaction Report
+
+We are focused on enhancing the quality of our reports to facilitate easier understanding and better data exploration. A key part of this initiative involves adding data about the types of hosts from which verification requests originate. This improvement is aimed at providing more insightful analysis and reporting capabilities.
+
+**Host Types Included**:
+
+- **PMS (Property Management System)**
+- **OSL (Online Service Listing)**
+- **API/MANUAL**: Currently, we do not have the data to separate API and Manual requests, so they are grouped together.
+
+This data has already been integrated into the **Guest Satisfaction** report so users can easily filter by each of these host types.
+
+
+
+## Second batch of Business KPIs ready
+
+This week we put quite a bit of effort into finishing the second batch of KPI deliveries.
+
+Now we’re able to see revenue metrics from the Host side (operator), APIs (Guesty + E-deposit) and the already existing Guest Revenue. With these we’re able to compute a Total Revenue figure and different weighted measures. These figures should be consistent with what is already reported in the Business Overview in the Guest Payments, Host Fees and E-deposit reports. However, at this stage we do not retrieve the revenue splits (Listing Fees, Verification Fees & Booking Fees for Hosts, etc.), though these are already available in the reports mentioned before.
+
+For more details, remember that you have much more information in the Data Glossary of the same report, which we encourage you to read 😊
+
+
+
+I know what you are thinking… the amount of metrics has grown by quite a bit so now we have a scroll bar in this section 😇
+
+Keep in mind that since all these metrics have a dependency on Xero, they are not available for the current month nor the previous one. So for example, today in mid-July we’re able to see the final figures of May, and at the beginning of August, those of June. Additionally, we’ve added figures regarding Host Resolutions, and again, these should be consistent with the [Accounting Reports](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md).
+
+As for the remaining metric, Billable Bookings, at the moment it’s not available. The main reason is that while it’s technically feasible to retrieve figures in the same spirit as we do for the invoicing, there are some considerations behind the logic that should be discussed with Pablo once he’s back from holidays, as he is the owner of the invoicing exporter tool at the moment. The very technical details on this subject can be found [here](https://www.notion.so/Data-quality-assessment-Billable-Bookings-97008b7f1cbb4beb98295a22528acd03?pvs=21). Once this is settled, it should be feasible to move forward with Billable Bookings!
+
+
+
+Lastly, we’ve included a quick-win asked by Suzannah. Now it’s possible to plot all metrics to visually observe trends and evolutions over time. It’s only available for the global view. Hopefully this will facilitate interpretation and reduce a bit of manual work!
+
+# 2024-07-15
+
+## Guest Satisfaction Report is now live
+
+As previously announced, we have completed the Guest Satisfaction report. This report includes ratings given by guests who completed a satisfaction survey after their guest journey. This data is crucial for us to assess our progress in enhancing user experience and identifying areas for improvement.
+
+We are still working on adding new features to this report, but it is already available for browsing. If you need access, please let us know.
+
+
+
+## Business KPIs per Deal is now live
+
+This week we have been working on implementing a good chunk of the expected second batch of deliveries for business KPIs.
+
+One key aspect that we wanted to enable is the possibility to visualise all these metrics for each Deal Id. With this new delivery we have two new tabs in the report that allow tracking the metric evolution for a single deal, as well as comparing the metrics of two or more deals in the same month. The screenshots will probably be more self-explanatory:
+
+
+
+Detail for the Deal 000…000 over time. We can see in a monthly basis the evolution of metrics for this single deal. This one looks like a test account, so most of the metrics do not report actual values.
+
+
+
+Comparison of metrics for multiple deals at end of January 2024. It seems we have a Deal that is going to churn!
+
+This new delivery also includes new metrics related mostly to Guest Payments/Revenue, as well as some minor changes in the report.
+
+The development is still ongoing for other metrics, such as host revenue (that we will call operator revenue) and host resolution payments metrics. Stay tuned to read more about it!
+
+## Great effort towards handling ad-hoc requests
+
+It’s been a bit more than a month since we started the new [way of handling ad-hoc data requests](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) through the #data channel. And a lot of people are using it, so thanks to everyone for following the procedure!
+
+However, the number of tickets created over these past weeks has increased at a faster pace than what we are able to accommodate - even though we’ve resolved some of them, we needed to focus on other priorities, as you can read in the history of the Data News. The stockpile of tickets was growing quite a bit lately…
+
+So we decided to put a bit more effort into these ad-hoc requests this week, especially with the massive help of Joaquín. Specifically, we worked on:
+
+- K Suites analysis on cancellations ratios,
+- Deposit/Waiver price analysis for Dashboard V2,
+- Pricing tier analysis for Checkin Hero fake door A/B test,
+- List of incomplete verifications for a client asking for it,
+- Extraction of Verification Payments configurations,
+- … and we’re still working on the booking/verification source split
+
+
+
+There are still some tickets on our backlog - we have not forgotten them. But hopefully, the pile has shrunk by quite a bit!
+
+# 2024-07-08
+
+## First set of business KPIs delivered, moving to the second batch
+
+This week we validated with the TMT the delivery of the first batch of business KPIs, which mostly consists of very high-level metrics and the capability to visualize both the historical figures and the ongoing ones via a month-to-date computation.
+
+The work on this subject, however, is far from over: our effort now resides in computing and reporting the second batch of KPIs. This batch mostly contains high-level revenue metrics, as well as related weighted measures. We’re also taking the opportunity to include a few more metrics to support the general comprehension of the business that we started in the previous batch. Additionally, thanks to the creation of the [Host Resolution payments report](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) last week, we will be able to include the first metrics on resolutions as well.
+
+Lastly, this week we started setting the foundation to allow the possibility to have all these metrics for each deal - in essence, what we call the “by deal” view, to complement the global view. It’s still work in progress and we’re currently discussing internally what is the best way to model this in the DWH with our Lead Data Engineer @Pablo Martin before he leaves on holidays!
+
+
+
+For more details about the second batch of KPIs, you can check the following Notion page:
+
+[Business KPIs Definition (II) - TMT session 3rd July 2024 ](https://www.notion.so/Business-KPIs-Definition-II-TMT-session-3rd-July-2024-36696d11d29a442d9b85a925dfc071b2?pvs=21)
+
+## Check-in Hero report updates
+
+This week we bring new updates for the Check-in Hero report with some additions to the existing ones in the dashboard and a new report inside the Dashboard.
+
+For the new addition we have the Host Rates tab inside Overview. Here you can see the rate of hosts that have Check-in Hero available in their listings, the evolution of this rate across time and how many of them have guests that have purchased the cover. This will be a very useful view, especially once we have more history with Check-in Hero, to see how it has been evolving and how attractive it is for our guests, or whether it might be more focused on some specific hosts for some reason.
+
+The new report in the app is called **Hosts Details**. In this report you can find detailed information on the hosts that have Check-in Hero available, the amount of guest journeys they have and how many Check-in Hero covers have been purchased by their guests. It is related to the previously included tab, but with more detailed data on each individual host and how much income each of them has generated through Check-in Hero.
+
+[Link to the report](https://app.powerbi.com/groups/me/apps/14859ed7-b135-431e-b0a6-229961c10c68/reports/b88e01e0-d2ad-4911-9cec-cb9fbd8ae840/d39d985ecac6a5971aad?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi)
+
+## CSAT Survey report WIP
+
+We have data from guests who are shown a satisfaction survey. We are in the process of building a new PBI report for data that was previously being delivered in a small Excel presentation. In this report you will be able to see the ratings and comments given to us by our guest users, which services they paid for, and how these scores are distributed across time or age.
+
+
+
+Expect to hear more about this report in the upcoming week.
+
+## Xero Resolutions report is fully ready
+
+Just a quick heads-up: after some [very useful troubleshooting with Jamie during the past week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), our new resolutions host payment report is ready for everyone’s eyes. As a reminder, this report tracks our accounting books to give you data on all the payments we make to hosts as part of approved claims, both in aggregate and in detail. And also, by Deal Id!
+
+
+
+If you would like to have access to this report, feel free to get in touch.
+
+## Currency rates ingestion progressing, almost there
+
+After our [updates from last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we’ve been simply loading more and more rates from the history. Currently, we have all rates for all currencies for all 2024 YTD, all of 2023 and all of 2022, which should cover most of the reporting and finance needs we have nowadays.
+
+We are still working on 2021 and 2020. Once we have the rates all the way to January 1st 2020, we will call this line of work done!
+
+## Pablo protects his frail heart from scare-induced heart attacks
+
+Recently, we experienced multiple technical issues ([such as this one](https://www.notion.so/20240619-01-CheckIn-Cover-multi-price-problem-fabd174c34324292963ea52bb921203f?pvs=21)) related to the integration between our [Core database](https://www.notion.so/Superhog-Core-Database-70786af3075e46d4a4e3ce303eb9ef00?pvs=21) and [the DWH](https://www.notion.so/DWH-78ce5f76598d49d185fa5fc49a818dc4?pvs=21). These issues are relevant because they tend to mess up the reporting in different ways. Sometimes it’s just data moving a bit more slowly than usual, sometimes it’s numbers becoming completely bogus. The worst times are when numbers are wrong, but in sneaky ways that might trick the reader into believing they are true. Those are the situations that mess with Pablo the most.
+
+The root causes of some of these are related to ways of working with the data and tables in Core itself, so we decided to write a small document to remind everyone who works on our beloved SQL Server database how the Data Team depends on it and how certain actions can wreak havoc.
+
+You can find the document here: [Careful with the DB: How to work in SQL Server without giving Pablo a stroke](https://www.notion.so/Careful-with-the-DB-How-to-work-in-SQL-Server-without-giving-Pablo-a-stroke-405c497b76c74bb29dcc790bc59928fd?pvs=21)
+
+Hopefully we can learn from these to reduce our rate of incidents in the future and protect poor Pablo’s frail heart ✌️.
+
+# 2024-06-28
+
+## New Currency Exchange report
+
+Now that we finally have currency data from xe.com, we created a new report in Power BI so that everyone has easy access to this data. Here you can find exchange rate data for any specific date and pair of currencies (currently we cover the top 8 most used currencies among our users).
+
+Some points to consider: we are still working on filling in all the historical data for these currencies. Right now we have rates from 1st of December 2023 to date, but we will keep filling until we get to 1st of January 2020. Also, we do some very basic forecasting and backfilling of the rates using the most recent data available from [xe.com](http://xe.com).
+
+If you need access to the report please let us know and as always feel free to reach the data team for any questions or concerns regarding the report.
+Link: [Currency Exchange](https://app.powerbi.com/groups/me/apps/10c41ce2-3ca8-4499-a42c-8321a3dce94b/reports/fcfd0a77-6c2a-4379-89be-aa0b090265d7/64ddecd28ca50dc3f029?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi)
+
+
+
+## Upgrades in [xe.com](http://xe.com) rates ingestion
+
+This week we’ve had a couple of new things going on with our [xe.com](http://xe.com) integration.
+
+First, as [we discussed last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’ve upgraded our subscription so that we have 30,000 API calls instead of 10,000. This will help us fill our historical tables at a much faster rate, and will also make the integration easier to work with in the future.
+
+We also had a working session with the Finance team where they rightly pointed out that we needed more decimal precision in the exchange rates. We were losing precision as we fetched the rates.
+
+We have made a new release of `xexe` that now has the right decimal precision, so rates will start coming in with the required decimal positions from now on.
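+
+For the technically curious, the issue boils down to rounding the rates too aggressively before storing them. A minimal sketch of the general idea, with a made-up rate value:
+
+```python
+from decimal import Decimal
+
+raw_rate = "0.846716234"   # rate as returned by the API, kept as text
+
+as_float = round(float(raw_rate), 4)   # rounding early loses precision: 0.8467
+as_decimal = Decimal(raw_rate)          # keeps every digit exactly as received
+
+print(as_float)     # 0.8467
+print(as_decimal)   # 0.846716234
+```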
+
+## Xero Resolutions payments report fresh out of the oven
+
+We finally got one of our new Xero reports ready after [the past few weeks of work](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md): the resolutions - host payments report!
+
+This report uses our accounting records to track all payments made to hosts as part of approved resolution processes. Even though it doesn’t provide details on the actual incident, at least we can report on how many payments happened, the $$$ amount and which host the payment is related to.
+
+We are still validating the data together with Jamie before opening up access. If you would be interested in the data in this report, do get in touch so we can provide you with access.
+
+## Data Team are a bunch of sloths
+
+As proven by the fact that summer is coming and we will take some days off 😎
+
+We just wanted to give you a heads-up so you are aware of when we will be off. We are already planning our capacity taking these into account, but it might be good for you to know in case you need something from a specific team member:
+
+- Pablo: OOF from 10/07 to 25/07.
+- Uri: OOF from 12/08 to 16/08
+- Joaquín: will be around during July and August.
+
+## KPIs reporting available in Business Overview
+
+This last Wednesday we’ve finally published the Main KPIs report in the Business Overview Power BI application. As explained in the [previous editions](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) of the Data News, we’ve been working on the Main KPIs subject for a while, and this week we’ve finished the remaining aspects to cover the first batch of deliverables.
+
+Specifically, this week we’ve added the Listings and also the Deal lifecycle metrics - which we changed from the original Host/PM approach to account for the single B2B entity that can be considered our client. Additionally, we created a Data Glossary to explain how the different metrics are computed and their definitions, as well as general comments about the data itself.
+
+
+
+Main KPIs report available in Business Overview
+
+Following this first batch we aim to capture feedback and continue the work towards the second batch of deliverables, that we will re-align on with the TMT this week.
+
+## Listings and Deals lifecycle
+
+In order to develop the first batch of business KPIs, at Data team level we’ve taken the opportunity to propose a first approach on the lifecycle evolutions of two of our main entities at Superhog: Listings and Deals.
+
+We have defined a set of 7 states that allow us to determine, for each listing and deal, its lifecycle state in a given month - yes, not only the current one, but the full history since it was created! Here’s the list of states:
+
+1. **New**: Listings/deals that have been created in the current month, without bookings.
+2. **Never Booked**: Listings/deals that have been created before the current month, without bookings.
+3. **First Time Booked**: Listings/deals that have been booked for the first time in the current month.
+4. **Active**: Listings/deals that have booking activity in the past 12 months (that are not FTB nor reactivated).
+5. **Churning**: Listings/deals that are becoming inactive because of lack of bookings in the past 12 months.
+6. **Inactive**: Listings/deals that have not had a booking for more than 12 months.
+7. **Reactivated**: Listings/deals that have had a booking in the current month but were inactive or churning before. After a 2nd booking during the reactivation month, they will be categorised as Active directly.
+
+Additionally, in order to measure the activity of a listing/deal with different recencies, we’ve set up the following 3 flags:
+
+- **Has the listing/deal been booked in 1 month?**: If a listing/deal has had a booking created in the current month
+- **Has the listing/deal been booked in 6 months?**: If a listing/deal has had a booking created in the past 6 months
+- **Has the listing/deal been booked in 12 months?**: If a listing/deal has had a booking created in the past 12 months
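+
+Putting the states and the flags together, here’s a simplified sketch of how a listing or deal could be classified for a given month. The real DWH logic is richer (and the churning boundary below is our own simplification), so treat this as illustrative only.
+
+```python
+from typing import Optional
+
+def lifecycle_state(created_this_month: bool,
+                    first_booked_this_month: bool,
+                    months_since_last_booking: Optional[int],
+                    was_inactive_or_churning: bool) -> str:
+    """Simplified classification of a listing/deal for a given month.
+
+    `months_since_last_booking` is None when the entity has never been booked.
+    The exact churning boundary is a simplification of the real DWH logic.
+    """
+    if months_since_last_booking is None:
+        return "New" if created_this_month else "Never Booked"
+    if first_booked_this_month:
+        return "First Time Booked"
+    if months_since_last_booking == 0 and was_inactive_or_churning:
+        return "Reactivated"
+    if months_since_last_booking > 12:
+        return "Inactive"
+    if months_since_last_booking == 12:   # about to cross the 12-month threshold
+        return "Churning"
+    return "Active"
+
+def booked_within(months_since_last_booking: Optional[int], window_months: int) -> bool:
+    """Recency flags: has there been a booking within the last 1 / 6 / 12 months?"""
+    return months_since_last_booking is not None and months_since_last_booking < window_months
+```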
+
+If you want to deep-dive on this lifecycle as well as get a high-level overview of the current volumes that applied to each state, we invite you to check the dedicated Notion page:
+
+[Listing & Deal lifecycle - 2024-07-29](https://www.notion.so/Listing-Deal-lifecycle-2024-07-29-4dc0311b21ca44f8859969e419872ebd?pvs=21)
+
+# 2024-06-21
+
+## Currency Rates Updates
+
+This week our integration with [xe.com](http://xe.com) has been running nicely after last week’s deployment. Every day, we fetch new rates from it.
+
+Besides that, we’ve been working on a Power BI report to make the rates accessible to the Finance team and generally, to anyone who needs it. We expect to have it ready and open up access during next week.
+
+Finally, we also encountered a blocker. The Subscription level we have purchased only allows to fetch 10,000 rates per month. This is enough for our daily loads, but we will need way more records to recover all the historical rates between now and the past years of Superhog history (we’re planning on fetching rates up to January 1st 2020). We will discuss with Ben R. upgrading our subscription once he’s back so he can have more firepower to fetch the needed rates.
+
+## Xero Reporting in progress
+
+Following our [planning with the Finance earlier this month](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’ve started work on some reports to be built on top of Xero data. These reports will leverage [our integration between Xero and the DWH](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), which enables us to visualize Xero data on Power BI.
+
+We took the chance when Jamie D. visited the Barcelona office early in the week to refine the final details of these reports. We are currently working on a Claim Payments report that will allow us to track how much Superhog has paid to Hosts and PMs to compensate for damages. This data will be visible across Deals and time, so it will be a very interesting report for many areas within Superhog. More news on it next week.
+
+## Check-In Hero reporting incident
+
+This Wednesday we also had a bitter situation. A technical incident caused the CheckIn Hero reporting suite to show inflated numbers in counts of sales and revenue. Luckily, the figures differences were small (since the figures as of today are still small), and the Data team fixed the issue fast. You can read more about the incident here: [20240619-01 - CheckIn Cover multi-price problem](https://www.notion.so/20240619-01-CheckIn-Cover-multi-price-problem-fabd174c34324292963ea52bb921203f?pvs=21).
+
+Even though we got lucky this time, this incident could have led to massively wrong reporting and was only caught by chance this time. We will be aiming to run some meetings with the involved parties to prevent mistakes like this in the future, as well as improve our own detection systems to raise these issues as soon as possible when they happen.
+
+## Check-in Hero report updates in progress
+
+We are working on adding more valuable data to the Check-in Hero report. After the inclusion of guest rates, we are now adding more information about hosts, how the uptake of the Check-in Hero option is evolving, and how effective it has been.
+
+
+
+We are also planning to add more detailed data about hosts and their listings with Check-in Hero and how it is performing for our business. This is still a work in progress, but expect to hear from us very soon once these updates are available.
+
+## KPI implementation is in progress
+
+Following [last week’s progress](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) on the KPI definition and implementation, this week we continued this effort.
+
+Firstly, we have finalised the implementation of the main Guest Journey metrics, which mainly consist of how many Guest Journeys have been created, started and completed. From these metrics, we are now also able to monitor the evolution of the Guest Journey start rate and completion rate every day in the Month To Date tab, as well as retrieve the past history. For instance, these are the historic values for # Guest Journey Created, Start Rate and # Guest Journey Started.
+
+
+
+Hm… looks like we have an outlier somewhere, can you guess which one?
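+
+For reference, these rates are simple ratios over journey volumes. Here is a tiny sketch with made-up numbers; whether the completion rate is taken over created or started journeys is our reading here, not a confirmed definition.
+
+```python
+def guest_journey_rates(created: int, started: int, completed: int) -> dict:
+    """Start and completion rates from monthly Guest Journey volumes.
+
+    Assumption: start rate is over created journeys and completion rate over
+    started ones; the exact denominators may differ in the actual report.
+    """
+    return {
+        "start_rate": started / created if created else 0.0,
+        "completion_rate": completed / started if started else 0.0,
+    }
+
+# Example with made-up volumes
+print(guest_journey_rates(created=1_000, started=820, completed=760))
+```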
+
+Secondly, a small but interesting addition - cancelled bookings - allows us to have a clearer picture of how our business operates.
+
+Additionally, we have been working on revamping a bit the look and feel of the dashboard that step by step gets more and more filled up. This is how it currently looks, with the addition of the metrics mentioned above:
+
+
+
+All the available metrics so far - more to come 😎
+
+Lastly, we’ve started working on the Host/Listings lifecycle - from creation, activation, activity, churning, etc. At this stage it’s pretty much still work in progress, all the magic being hidden at the moment in the DWH… So for this you’ll need to wait until the next edition of the Data News 😮.
+
+# 2024-06-14
+
+## Advances in KPI definition with TMT
+
+This week we experienced great advances on [the KPI definition](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) - and even in the implementation -, as we had the chance to present the Data proposal for KPIs to the TMT.
+
+In a nutshell, we’re following a process of batch-delivery: each batch can contain a set of metrics that we will start measuring, as well as a set of ways to represent and filter the data. Each batch will be delivered incrementally, and as we advance on the implementation we will also take the opportunity to refine and agree on following batches.
+
+At this stage, we aim to provide a first batch of 21 KPIs by the end of June, with a month-by-month overview to keep track of the historic figures, and a Month-to-Date (MTD) overview to better anticipate where the KPIs will land by the end of the month while it’s still in progress. This first batch will mostly contain high-level volume-based metrics, and we aim to include more revenue-based figures in the second batch.
+
+
+
+A draft of how the new dashboard is looking so far - still quite empty for the time being 😄
+
+Additionally, as part of the KPIs exercise, this week we have also implemented an easy way to estimate - yes, ***estimate*** - when the guests start and complete the guest journey, that would enable us to easily extract and represent both historical and current month Guest Journey related figures.
+
+For more details about the first batch of KPIs, you can check the following Notion page:
+
+[Business KPIs Definition - TMT session 12th June 2024](https://www.notion.so/Business-KPIs-Definition-TMT-session-12th-June-2024-3b32b4c2c2904cdf89abdeea536332fb?pvs=21)
+
+## `xexe` is now in production
+
+This week we’ve finalized the implementation of `xexe`, our internal tool to read rates from the [xe.com subscription that we purchased last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md). `xexe` bridges the gap between xe.com and our DWH, loading rates into it on a daily basis.
+
+This is a great step forward in our [[XE.com](http://XE.com) Project](https://www.notion.so/XE-com-Project-b5001ea18b634519b814cb8014e36581?pvs=21). We will continue work with leveraging the rates within the DWH and also opening up access and providing training and support for different stakeholders. Stay tuned for more updates.
+
+For developers and other technical profiles, you can review the tool’s [repo here](https://guardhog.visualstudio.com/Data/_git/data-xexe).
+
+## Upgrading the Check-in Hero Report
+
+We have updated the [Check-in Hero report](https://app.powerbi.com/Redirect?action=OpenReport&appId=14859ed7-b135-431e-b0a6-229961c10c68&reportObjectId=8e88ea63-1874-47d9-abce-dfcfcea76bda&ctid=862842df-2998-4826-bea9-b726bc01d3a7&reportPage=ReportSectionddc493aece54c925670a&pbi_source=appShareLink&portalSessionId=2763b723-4cd5-4d70-9a3f-229730ceaa8b) to better visualize the funnel flow and included a new tab that shows both the conversion rate of total guest journeys that offer check-in hero vs total guest journeys, as well as the rate of check-in hero purchased vs total guest journeys that offer check-in hero.
+
+Take a look at it to check out this new information, and know that we are still working on it to keep adding more relevant data.
+
+
+
+# 2024-06-07
+
+## Data Team organisation: alignment with TMT
+
+This Thursday, the Data Team had an alignment session with the TMT on the team organisation [we talked about a couple of weeks ago](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) and the main priorities for the following months. In a nutshell, our aim is to foster a Data-Driven culture within Superhog.
+
+In terms of how we organise work, we identify 3 main lines of work, namely: Maintenance, Projects and Ad-hoc requests. The following table summarises it:
+
+| **Type** | **Nature** | **Tracking** | **Time allocation** | **Estimation / priority** | **Examples** |
+| --- | --- | --- | --- | --- | --- |
+| Maintenance | Reactive, ensuring data systems and processes work well | DevOps | No constraint | Top priority, unplanned | Data pipeline issues, outages, data quality, availability of critical reports |
+| Projects | Long-term projects to build data products to contribute to the strategic value | High-level: ProductBoard. Low-level: DevOps | No constraint | High priority, planned following an Agile iterative approach | New data pipelines, dashboards, data gov frameworks, alerting, A/B tests |
+| Ad-hoc Requests | Short-term, small tasks | DevOps (for non-trivial) | Data Captain, max. 10h/week | Priority based on common sense. No estimation | Run queries, small insights, quick reports |
+
+The main initiatives will be placed in our [Data Roadmap](https://superhog.productboard.com/roadmap/8171557-7-data-roadmap) in ProductBoard, knowing that this roadmap is likely to change as new topics and priorities arise.
+
+***Ok but… how can I submit requests for the Data Team?***
+
+We have implemented a Slack Data Request workflow in our [#data](https://superhogteam.slack.com/archives/C06GFGHJD7H) channel that will centralise the request intake! Check the screenshots below:
+
+
+
+
+
+Each week, someone from the Data Team will be designated as the **Data Captain**, and this will be the person that will manage the incoming requests through our Data slack channel. Ahoy, Data Captain! 🚢
+
+For more details on the organisation, you can check: [Data Team Organisation](https://www.notion.so/Data-Team-Organisation-81ea09a1778c4ca2ab39e7f221730cb5?pvs=21)
+
+If you’re interested in knowing the main Data priorities, you can check our OKRs: [Data OKRs](https://www.notion.so/299e4da6e92043899646d11609c051ae?pvs=21)
+
+Ah! And we also have a Data Team official photo, by our photographer Alex Simon!
+
+
+
+*Pablo, Joaquín and Uri working very hard to keep their eyes open at the moment of the picture*
+
+## Defining KPIs: first steps
+
+This week we also resumed the KPIs definition subject. In this long-term project, we aim to provide the key metrics that will help the TMT and the main business areas understand how the business is doing and make better decisions based on facts.
+
+We already started refining the activity of listings a few weeks ago, as well as finalising the implementation of the main sources of revenue. This week though, our focus was more on laying the foundations for the broad KPIs definition. Our current priority is to set the main, company-wide KPIs, while in further iterations we will continue going into the details of each product/business area.
+
+
+
+The hog, busy setting up the KPI control room wirings.
+
+So far, we have conducted 2 internal workshops within the Data Team with 2 different approaches: a detailed refinement of existing high-level metrics and needs, and a more top-down approach from scratch. Lastly, we’ve also checked the key metrics needed for the Superhog OKRs, since the nature of these can actually help determine the main priorities for metric implementation.
+
+Next week we will have the first high-level workshop with the TMT, so stay tuned!
+
+## Survey report for Marketing
+
+We obtained data from a survey with around 4,000 entries, sent to different hosts and managers across the US and covering many topics of interest for the business. With this data, our team built an Excel report to make the analysis easier to digest and interpret.
+
+This report delves into a variety of topics like trends, challenges, and opportunities for the business. From customer relationships to operational preferences, there is a lot of data available to work with in this report.
+
+If you are interested in taking a look at it, please give us a shoutout, and if needed, we can arrange a meeting to explore the report in detail.
+
+
+
+## [XE.com](http://XE.com) subscription ready, work has started
+
+This week we’ve started technical work on integrating [xe.com](http://xe.com)’s currency API into our systems. This API will allow us to fetch currency exchange rates in an automated way to feed our DWH and other systems. This way, we’ll be able to perform currency conversion in all of our reporting and many other processes.
+
+
+
+The piggies invading the FX trading floor.
+
+This week we purchased a yearly subscription to the service and we are now busy building the necessary code to fetch the rates into our DWH. We have baptised the tool we’ll use for it as `xexe` (*shay-shay*). If you are technically inclined and feel curious, you can check out [the repository](https://guardhog.visualstudio.com/Data/_git/data-xexe).
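+
+If you just want a flavour of what the tool does, here’s a minimal sketch of fetching one day of rates. Note that the endpoint path, parameters and response handling below are illustrative assumptions, not the actual xe.com API contract nor the real `xexe` code:
+
+```python
+# Minimal sketch: fetch one day's FX rates so they can be loaded into the DWH.
+# The endpoint, parameter names and response shape are assumptions for
+# illustration only.
+import os
+
+import requests
+
+API_URL = "https://xecdapi.xe.com/v1/historic_rate"  # hypothetical endpoint
+
+
+def fetch_rates(base: str, targets: list[str], date: str) -> dict[str, float]:
+    """Return {target_currency: rate} for converting `base` on a given date."""
+    response = requests.get(
+        API_URL,
+        params={"from": base, "to": ",".join(targets), "date": date},
+        auth=(os.environ["XE_ACCOUNT_ID"], os.environ["XE_API_KEY"]),
+        timeout=30,
+    )
+    response.raise_for_status()
+    payload = response.json()
+    return {row["quotecurrency"]: row["mid"] for row in payload["to"]}
+
+
+if __name__ == "__main__":
+    rates = fetch_rates("GBP", ["USD", "EUR"], "2024-06-01")
+    print(rates)  # e.g. {'USD': 1.27, 'EUR': 1.17} -> load into the DWH rates table
+```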
+
+You can keep track of the progress of this line of work here: [[XE.com](http://XE.com) Project](https://www.notion.so/XE-com-Project-b5001ea18b634519b814cb8014e36581?pvs=21).
+
+## PMS volume figures export available
+
+This week, starting from [a request by Claire](https://guardhog.visualstudio.com/Data/_workitems/edit/16919/), we’ve done some number crunching to compute figures around how big each of the different PMSs that plug into Superhog’s platform is. The result is a simple yet interesting Excel export showcasing the data, including aggregations such as counts of active listings, created bookings, and host and guest revenue.
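+
+To give a flavour of the number crunching involved, here’s a rough pandas sketch of the aggregation. The column names (`pms_name`, `listing_id`, `booking_id`, `host_revenue`, `guest_revenue`) are assumptions for illustration, not the real DWH schema:
+
+```python
+# Illustrative sketch of a per-PMS summary from booking-level rows.
+import pandas as pd
+
+
+def pms_volume_summary(bookings: pd.DataFrame) -> pd.DataFrame:
+    """Aggregate booking-level rows into one summary row per PMS."""
+    return (
+        bookings.groupby("pms_name")
+        .agg(
+            active_listings=("listing_id", "nunique"),
+            bookings_created=("booking_id", "nunique"),
+            host_revenue=("host_revenue", "sum"),
+            guest_revenue=("guest_revenue", "sum"),
+        )
+        .sort_values("bookings_created", ascending=False)
+        .reset_index()
+    )
+
+
+# pms_volume_summary(bookings_df).to_excel("pms_volumes.xlsx", index=False)
+```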
+
+If you would like access to this data, please get in touch with us and we will share it.
+
+## Priorities aligned and refined with the Finance team
+
+As you can see in [our roadmap](https://superhog.productboard.com/roadmap/8171557-7-data-roadmap), Finance is going to get a lot of love from the Data team in the upcoming weeks. We have multiple joint lines of work open to improve reporting on financial figures and to streamline some business processes. Some of these topics include:
+
+- Formalizing and implementing proper processes to invoice e-deposit customers
+- Formalizing and implementing proper processes to invoice new screening API customers
+- Creating a new suite of Xero-driven reports, such as Damage Waiver payouts
+- Integrating Currency rates into some reports and processes
+- Providing general support across invoicing and new dashboard topics related to Finance and Invoicing
+
+After some sessions with the team, we now have a long road of work ahead to get this done. We’ll keep posting updates here on our advances as they happen.
+
+# 2024-05-30
+
+## Waiver payouts now present in Business Overview
+
+Our [Business Overview reporting suite](https://app.powerbi.com/Redirect?action=OpenReport&appId=33e55130-3a65-4fe8-86f2-11979fb2258a&reportObjectId=01d5648d-1c0b-4a22-988d-75e1cd64b5e5&ctid=862842df-2998-4826-bea9-b726bc01d3a7&reportPage=ReportSection57283b6e80c2d286de47&pbi_source=appShareLink&portalSessionId=248e0b8a-7246-4ec4-a1b4-b82ee7a564e0) now displays the amounts paid back, as due waivers, to hosts that take the waiver risk. This is an improvement over [the previous version](https://app.powerbi.com/Redirect?action=OpenReport&appId=33e55130-3a65-4fe8-86f2-11979fb2258a&reportObjectId=01d5648d-1c0b-4a22-988d-75e1cd64b5e5&ctid=862842df-2998-4826-bea9-b726bc01d3a7&reportPage=ReportSection57283b6e80c2d286de47&pbi_source=appShareLink&portalSessionId=248e0b8a-7246-4ec4-a1b4-b82ee7a564e0), which only showed the total charged to guests.
+
+The visuals simultaneously show the total amount we charged guests and the total amount we sent back to hosts, making it simple to see what’s ours to keep.
+
+## Xero-based fees reports for Guesty
+
+[Last week we made a release](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) in [Business Overview](https://app.powerbi.com/Redirect?action=OpenReport&appId=33e55130-3a65-4fe8-86f2-11979fb2258a&reportObjectId=0642f366-c243-4879-8228-d8d6cc78f266&ctid=862842df-2998-4826-bea9-b726bc01d3a7&reportPage=95e1c6dfd47615e58712&pbi_source=appShareLink&portalSessionId=248e0b8a-7246-4ec4-a1b4-b82ee7a564e0) to show the fees we should theoretically charge Guesty and e-deposit customers based on the data available in our [Superhog Cosmos DB](https://www.notion.so/Superhog-Cosmos-DB-b76557e0b49149cf8cbfed7309e16ac6?pvs=21) .
+
+To complement this report, we now also show the revenue we have effectively charged Guesty according to Invoices/Credit notes existing in [Xero](https://www.notion.so/Xero-8085e30c86624af48ca39ca047b6dffe?pvs=21) .
+
+## Refining invoicing exports for the new Screening API
+
+The API team is moving forward with our new screening API. This week we started some refinement discussions with Ana to better understand the data structures and the data we will need to pull out so that our Finance colleagues can properly invoice the customers of this API.
+
+Ana dropped [this wonderful documentation](https://www.notion.so/Screening-API-Project-final-b99730e6962545a389c5f838b0bcbcb4?pvs=21) that will help a lot. We will tackle this during [upcoming weeks](https://superhog.productboard.com/entity-detail/features/27338025) as the technical and commercial launch of the service closes in.
+
+## CTO and senior devs greenlight to integrate CosmosDB with DWH
+
+As work on APIs and Resolutions increases, there are more and more expectations around reporting needs for those areas. Currently, we face a tricky situation: Airbyte, our preferred tool for data extraction, does not have native support for extracting data from CosmosDB, which is where a good chunk of the data related to those functions lives. Thus, integrating CosmosDB and the DWH will take some effort.
+
+We have planned some capacity for this topic in June. But, as a first step, we already sat down this week with Ben R., Ray and Manu to discuss the role of CosmosDB in Superhog’s technology in the long term, as well as the technical side of things and the possible architectural patterns we could follow.
+We still haven’t settled on one technical approach to implement this, but we all agreed that it makes perfect sense to go for the integration if we find a good way to do it.
+
+Next week we expect to start doing some whiteboarding, research and testing to better judge our options and hopefully come out with a design for how to implement the integration. We will keep posting updates around here, so stay tuned.
+
+## Automated Alerts in Slack
+
+The Data Team generally enjoys *not* having to do things, also known as being efficient. As part of it, we try to automate a lot of things in our infrastructure. We automate the extraction of data from many of our systems, like the Superhog backend, Stripe or Xero. We also regularly automate processes and transformations that run inside the DWH.
+
+This is all nice and smooth, until one day something breaks. When something breaks, step number one is being aware of it. Now that the size of the team has increased, we decided to invest a bit of time in automatically monitoring the success or failure of many of our automated jobs. Without this, it would be easy for something to slip and for the team to drop the ball.
+
+
+
+The Data Team, busy putting out fires.
+
+Since this week, all our Airbyte and dbt jobs drop messages in dedicated Slack channels so we can monitor them. That way, we know for sure when we can sit back and relax doing other stuff 😎.
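+
+For the curious, the pattern behind these notifications is simple: when a job finishes, it posts a short message to a Slack incoming webhook. Here’s a minimal sketch; the webhook URL and the job function are placeholders, and tools like Airbyte also offer their own built-in notification settings:
+
+```python
+# Minimal sketch: a scheduled job reports its outcome to Slack via an
+# incoming webhook. The webhook URL and the job body are placeholders.
+import json
+import urllib.request
+
+WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
+
+
+def notify_slack(job_name: str, succeeded: bool) -> None:
+    """Post a one-line status message to the configured Slack channel."""
+    status = "✅ succeeded" if succeeded else "🔥 FAILED"
+    body = json.dumps({"text": f"Job `{job_name}` {status}"}).encode("utf-8")
+    request = urllib.request.Request(
+        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
+    )
+    urllib.request.urlopen(request, timeout=10)
+
+
+def nightly_dbt_run() -> None:
+    """Placeholder for the actual scheduled job (e.g. invoking `dbt build`)."""
+
+
+try:
+    nightly_dbt_run()
+    notify_slack("dbt nightly run", succeeded=True)
+except Exception:
+    notify_slack("dbt nightly run", succeeded=False)
+    raise
+```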
+
+# 2024-05-24
+
+## Data team grows: meet Joaquín
+
+This week we celebrate an exciting onboarding! The analysis muscle gains some extra strength with Joaquín’s arrival.
+
+Joaquín is originally from Chile and came to Barcelona to pursue an MSc in Data Science. A Civil Engineer by education, his career at Walmart led him to pivot towards Data, hence he joins us as a Data Analyst.
+
+Joaquín will be based in Barcelona. You can find him by searching in Slack for `joaquin.ossa`.
+
+
+
+Welcome Joaquín!
+
+## New Data Team organization coming soon
+
+With Uri and Joaquín around, the data team enters a new phase and leaves behind the era of Pablo playing one-man orchestra.
+
+
+
+Pablo parting ways with his one-man orchestra phase.
+
+The new team will be able to deliver much more and bring a new wave of capabilities to Superhog, but it also requires more organization and better processes to ensure everything is properly lubed and working.
+
+As part of this, Uri and Pablo are already working on defining the new organization for the team. Once the new structure is defined, we will communicate it company-wide so that we can all be aligned with our new ways of working. Stay tuned!
+
+## E-deposit and Guesty revenue available in Business Overview
+
+This week we started recovering the revenue information for the first 2 API sources: Guesty and E-deposits. The revenue figures are now available in the [Business Overview reporting suite](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21). These two sections display the booking information and the respective revenue coming directly from the [Cosmos DB](https://www.notion.so/Superhog-Cosmos-DB-b76557e0b49149cf8cbfed7309e16ac6?pvs=21) backend source. Special thanks to Ana for her support in understanding the logic behind it!
+
+
+
+## Xero Booking, Verification and Listing net fees are now available in Business Overview
+
+After [last week’s release](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we’ve done some more work on the Xero data inside the DWH. This has allowed two new improvements:
+
+- Besides Listing and Verification fees, Booking fees are also now available in the reporting suite.
+- We are now able to display *net* fees. This means that revenue figures now take into account that we don’t just invoice our customers, but also credit them back. The new charts show the invoiced amounts with the credited amounts subtracted, giving a more realistic picture of Superhog’s final revenue.
+
+# 2024-05-17
+
+## CheckIn Hero revenue available in Business Overview
+
+This week we extended our efforts on the [reporting around CheckIn Hero](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) a bit to also include the juicy revenue figures in our [Business Overview reporting suite](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21). You can now find them in the Guest Payments section, along with the Waiver and Fees revenue sections.
+
+
+
+## First release of Xero host fees
+
+The [Business Overview](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21) has also received two new data points: we are now finally leveraging our [integration between Xero and the DWH](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) to show you fees invoiced according to our accounting books.
+
+We’ve begun with Listing and Verification fees, which are now available in the Host Fees section. Our confidence in the figures is high, although there might be small adjustments in the coming days as we run some checks.
+
+
+
+Booking fees will soon follow, but we are still working on some data quality issues that we want to address before releasing the data to avoid misinformation across the organization.
+
+## Guesty and e-deposit fees in the works
+
+Besides the previous updates on revenue figures, we are also working on running some automated calculations on the theoretical revenue for e-deposit clients in general, as well as for Guesty. We will feed these straight from our [Cosmos database](https://www.notion.so/Superhog-Cosmos-DB-b76557e0b49149cf8cbfed7309e16ac6?pvs=21) into PBI reports that will run the numbers on the different nightly fees according to the activity of our customers. Stay tuned for more updates.
+
+# 2024-05-10
+
+## CheckIn Hero Reporting is now live
+
+This week our beloved Guest Squad has put CheckIn Hero live and our first customers are being offered this new cover. There was even a sale on the very same launch date!
+
+The Data team has also collaborated on this project by delivering a first reporting suite to monitor the evolution of the product. The reporting suite shows the product’s key metrics: revenue, conversion rates, outstanding risk, etc. The data is refreshed automatically by our Data Platform and shown with a <24-hour latency.
+
+
+
+A sample of the CheckIn Hero reporting, showing our first sale.
+
+This reporting suite raises the bar. In the past, analysing the performance of our products has been a challenge that implied a lot of manual work and significant time lags between facts happening and them being reported. With this suite, we are able to monitor the key metrics of CheckIn Hero automatically and pretty much as they happen. In the Data team we are delighted with this delivery, and we look forward to ensuring all of our operations enjoy levels of data maturity at least as good as this example.
+
+Finally, congratulations to everyone involved in launching CheckIn Hero!
+
+
+
+CheckIn Hero to the rescue.
+
+## Data team grows: meet Oriol
+
+This week we celebrate an exciting onboarding! Oriol joined the Data team this Friday to help us leverage Data Analysis to improve Superhog. We are thrilled by the cool stuff we will be able to achieve with Oriol’s help.
+
+Oriol comes with a strong background in Data Science, ecommerce and digital analytics. He was previously a Data Science Manager in Veepee, where he and his team focused on tailoring customer experiences to boost sales.
+
+Oriol will be based in our Barcelona office, so do look for him around and introduce yourself. If not, feel free to ping him on Slack! (oriol.roque)
+
+
+
+# 2024-04-19
+
+## Xero integration with DWH
+
+Great news! After our research during the past week, we have finally pulled the trigger and we have now integrated our [Xero](https://www.notion.so/Xero-8085e30c86624af48ca39ca047b6dffe?pvs=21) tenant with the DWH. This gives us the ability to pull data from our accounting system into our DWH so we can build reporting on top of it. The most interesting part will probably be reporting on our revenue by leveraging the invoices that our Finance team builds with a lot of effort.
+
+## Currency Exchange Rates adoption is getting started
+
+Superhog is a multicurrency company: we operate in many regions and accept payments in several currencies, both from guests and hosts.
+
+Managing this is a challenge. Handling multiple currencies means that prices, amounts, payments and many other pieces of financial data can come in a variety of units. So, answering questions like “what’s the average booking fee for our customer base?” or “what’s the total revenue we had in waivers last March?” always generates the need to convert a lot of different amounts in multiple currencies into a single one.
+
+To succeed at managing this challenge, we need a complete, up-to-date and easy-to-access database of conversion rates, so that our reports, our processes and our people can always have the right rate handy. Today, this is a capability we are lacking.
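+
+To make the need concrete, here’s a toy example of the conversion every report ends up doing once a rates table is handy. The rates and amounts below are made up for illustration, not real figures:
+
+```python
+# Toy example: collapse amounts in mixed currencies into a single reporting
+# currency (GBP). Rates and payments are illustrative, not real data.
+from decimal import Decimal
+
+rates_to_gbp = {"GBP": Decimal("1"), "USD": Decimal("0.79"), "EUR": Decimal("0.85")}
+
+waiver_payments = [
+    (Decimal("35.00"), "GBP"),
+    (Decimal("49.00"), "USD"),
+    (Decimal("42.50"), "EUR"),
+]
+
+total_gbp = sum(amount * rates_to_gbp[currency] for amount, currency in waiver_payments)
+print(f"Total waiver revenue: £{total_gbp:.2f}")
+```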
+
+We have done some pre-alignment with Finance, Product and Engineering, but Data will be leading this effort. We will soon document and share our plans so you can stay up to date.
+
+
+
+The hog, trying to convert some dollars to good old pounds before heading to the pub.
+
+# 2024-04-05
+
+## Exploring integrations with Xero
+
+This week we’ve worked together with the Finance team to explore our options in terms of programmatically creating invoices for our hosts. The creation of invoices in Xero, our accounting system, is currently manual. Creating over 1,000 invoices per month by hand is not the most pleasant, efficient or error-proof way of running this, so we are looking into automating it.
+
+The outlook is positive: it seems Xero does have the capabilities required to make this happen, and thus we think the integration is feasible. In the following weeks we will organize work around this.
+
+
+
+The hog, wishing it was doing something more interesting than creating invoices in Xero.
+
+## Working on Booking fees reporting
+
+This week we’ve also been busy working on the Booking Fee calculation within our DWH. These are numbers we usually run as part of our monthly invoicing process outside of the DWH, but computing them inside the DWH lets us quickly run them on demand for any period of time in the past.
+
+We are close to having it ready, but we’re not fully there yet. As soon as the data model and related reports are ready, they will become available in the [Business Overview](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21) reporting suite. Stay tuned.
+
+# 2024-03-22
+
+## Waiver stats now live in Business Overview
+
+This week we have managed to deploy a new tab on our [Business Overview](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21) - Guest Payment reports. This tab tracks the Waiver payments that we have processed through our backend.
+
+
+
+There are some caveats on the data, all listed on the report itself so you can take them into account.
+
+## Data Quality Audit
+
+This week we also ran some Data Quality checks to ensure proper invoicing during April. The checks revolved around ensuring that pricing of new accounts and accounts finishing free trials were up to date.
+
+Thanks to Kayla, Alex and all the AM team for cleaning things up.
+
+# 2024-03-15
+
+## A new reporting suite has come to life 🚀
+
+This week we’ve finally deployed a new Reporting Suite built with Power BI: the Business Overview.
+
+The goal of this suite is to act as a central point to monitor the most important metrics on how the business is doing: how much revenue comes in from different products, counts of users, bookings, listings, etc.
+
+We have started small with the first content we’ve managed to get ready. We will slowly grow the scope of the suite as we clean and prepare more and more data in [our DWH](https://www.notion.so/DWH-78ce5f76598d49d185fa5fc49a818dc4?pvs=21).
+
+You can read more about the suite here: [Business Overview Reporting Suite](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21)
+
+## Still looking for Data Analysts
+
+We are still looking for new members for the data team! We are going through some interviews, but finding good talent is challenging.
+
+If you have any contact in your network that you would like to refer to us, we are all ears!
+
+# 2024-03-01
+
+## `sh-invoicing` upgrades and new run
+
+Last week we focused mainly on Finance work. Within that, the bulk of the effort went into delivering a few upgrades to our internal tool for generating invoicing reports, `sh-invoicing`. The most important item was upgrading the tool to read from multiple Stripe accounts, since we are now receiving waiver payments in two different accounts. We also made other improvements to the file structure and fixed a few small bugs.
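+
+For the technically minded, the multi-account part boils down to running the same export over several API keys. A rough sketch using the official `stripe` Python package is shown below; the environment variable names and the output shape are assumptions, and the real `sh-invoicing` code is more involved:
+
+```python
+# Rough sketch: export balance transactions from more than one Stripe account.
+# Environment variable names and output columns are assumptions.
+import os
+
+import stripe
+
+ACCOUNT_KEYS = {
+    "UK": os.environ["STRIPE_UK_SECRET_KEY"],
+    "US": os.environ["STRIPE_US_SECRET_KEY"],
+}
+
+
+def export_balance_transactions() -> list[dict]:
+    """Return one row per balance transaction, tagged with its Stripe account."""
+    rows = []
+    for account, key in ACCOUNT_KEYS.items():
+        transactions = stripe.BalanceTransaction.list(api_key=key, limit=100)
+        for txn in transactions.auto_paging_iter():
+            rows.append(
+                {
+                    "stripe_account": account,
+                    "id": txn.id,
+                    "amount": txn.amount,
+                    "currency": txn.currency,
+                    "created": txn.created,
+                }
+            )
+    return rows
+```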
+
+With these improvements, and with the volume of Acquired transactions reduced to a pretty much residual size, we hope to reconcile all waivers this month automatically, without any of our finance colleagues having to chew through a huge pile of spreadsheets.
+
+## Iplicit API
+
+This week we also received access to the API of Iplicit, our new accounting software. This will enable us to develop code in `sh-invoicing` to automatically generate invoices in Iplicit without manual interaction. This will be a big focus during the next week.
+
+# 2024-02-23
+
+## Upgraded e-deposit Report
+
+After some discussions with @Ana de Vega and @Leo, with some support from @Ray, and a bit of work from our side, we have refurbished the old Athena PBI report and turned it into our brand new [e-deposit PBI report](https://app.powerbi.com/Redirect?action=OpenReport&appId=86bd5a07-0cd9-40ab-9e97-71816e3467e8&reportObjectId=91e9961d-0376-4199-a40c-da6fd1d4afcf&ctid=862842df-2998-4826-bea9-b726bc01d3a7&reportPage=ReportSectioncac343fda6a46e7ca83e&pbi_source=appShareLink&portalSessionId=abc1f4da-2785-4097-a49e-a05c27027234).
+
+
+
+The new report now shows details for all verifications performed by Guesty or by other customers (once they actually do some; for now it’s just an empty table). We also show some aggregates and over-time info so that you can monitor how product adoption is evolving.
+
+## Stripe US ↔ DWH integration in progress
+
+After completing [the integration of our Stripe UK account with our DWH](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we are now working on bringing over the records from our separate US account so we can have a unified view of all transactions, independently of which Stripe account they take place in.
+
+This is currently a work in progress. The US data is already landing in our DWH, but we still have to merge it with the UK data to provide a simple experience downstream.
+
+## Cancellation Launch: validating data model and preparing to design reports
+
+This week we’ve spent some time together with @Gus and @Lawrence working on the design of the data model behind our new Cancellation Cover product. We were able to align reporting needs and validate that, once the product hits production, the Superhog backend will be storing all the information that we will need for reporting purposes.
+
+During the next weeks we will also be meeting different stakeholders to gather requirements for the data needs around the new product so that we can work on building the necessary data products for everyone.
+
+If you think you will need some data on the Cancellation Product and we haven’t discussed it yet, please [get in touch](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21) as soon as possible.
+
+## Leveling the ground for more accounting automations
+
+With the new Iplicit deployment getting closer, we have asked the Iplicit team to provide us with API access to our testing environment so we can start looking into the automatic generation of invoices. During the next weeks, we will be developing tools on our side to include this as part of the wider invoicing process, so that our finance team can automatically generate invoices for our host customers each month.
+
+# 2024-02-16
+
+## Data infra is now ready
+
+**Data infrastructure is now live!** 🥳🥳🥳🥳🥳
+
+
+
+A hog that is nearly as happy as us.
+
+After some weeks of work, many technical meetings, a lot of tests and experiments in Azure, and many coffees, we finally deployed the production environment for our data infrastructure. We now have a Datawarehouse, a data integration tool (Airbyte) to pull data from different sources and a data modelling tool (dbt) to make it all work. We are now able to ingest data from several sources, clean it, prepare it, analyse it and serve it through reports and other channels.
+
+We will be holding a few meetings during the next couple of weeks to get the word out and provide more details to different stakeholders so we can all align on how this infrastructure will be used in the coming months (and years).
+
+Our heartfelt gratitude to @Ben for the help along the way!
+
+## Stripe UK data now available in DWH
+
+Our first step since we got the DWH and Airbyte running has been to integrate our Stripe UK Account with it. We are currently absorbing data for charges, payment intents and balance transactions into the DWH. We are now able to model this data in any way we need and build reports and exports on top of it.
+
+Note that our Stripe US account is still not integrated.
+
+Do you have great ideas as to how to put this data to work? If so, please do [get in touch](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21) so we can discuss them and add them to our backlog.
+
+# 2024-02-09
+
+## Small `sh-invoicing` fixes
+
+This week we’ve been applying some quick fixes to the [sh-invoicing](https://www.notion.so/sh-invoicing-fdcf47ce663a4ed584593caab53aaa1e?pvs=21) tool to cover a few edge cases that had slipped through. There are many subtleties in how we invoice, and it shows through these little situations that appear every now and then. But thankfully, we keep on spotting and controlling them.
+
+You can read more about these changes in the tool’s [changelog](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/CHANGELOG.md).
+
+## Rehearsing final infrastructure deployment
+
+This week we have finalized documenting and testing our infrastructure design and deployment procedure. We are now happy with the end result and thus ready to deploy in production, which we plan to do next week.
+
+[It’s been a long journey](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) to get here, but we’re confident it’s worth it. Once we deploy our new tools and start using them, the productivity and capabilities of Superhog in using data will skyrocket.
+
+# 2024-02-02
+
+## Search by Booking ID in PBI reports
+
+If you regularly use our [Bookings report](https://app.powerbi.com/Redirect?action=OpenReport&appId=86bd5a07-0cd9-40ab-9e97-71816e3467e8&reportObjectId=40261ecd-9e80-42ec-8650-40ed0edf56d4&ctid=862842df-2998-4826-bea9-b726bc01d3a7&reportPage=ReportSection&pbi_source=appShareLink&portalSessionId=01a046fd-8d4c-4095-baf3-172dde28fd52) from our [PBI Reporting Suite](https://www.notion.so/Superhog-Reporting-Production-Suite-6da7d7a2c37a43bc9b82802670e46b97?pvs=21), we’ve made a little change that might make your life easier.
+
+You can now search for a specific Booking ID in the filters. With this, you can narrow down the data to one specific booking for which you already know the ID without going crazy playing tricks with other filters.
+
+
+
+## Invoicing tool Baptism of Fire
+
+Finally, after a few weeks of work, it’s time to start a new invoicing cycle with great novelties.
+
+Yesterday we began the invoicing process for January 2024 with [our new tool](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), `sh-invoicing`. We have used the tool to:
+
+- Fetch all necessary data from the app.
+- Export all transaction records from our UK Stripe account.
+- Automatically compute the hosts’ amounts due for waivers processed through Stripe.
+- Automatically perform currency conversions between the payee and host currencies.
+
+Even though it doesn’t sound like much, the last couple of bullets were being done pretty manually until now, which meant someone had to go through *hundreds* of spreadsheets doing tasks like copy-pasting from one place to another, applying formulas, etc. We’ve already saved a great deal of time with these.
+
+
+
+The hog and the new tool, having a cocktail thanks to the time they saved.
+
+More changes and improvements will come next cycle. For now, we will be supporting the finance team during the execution of the current cycle in case errors arise or last minute fixes are needed.
+
+Besides that, as a result of some last minute work done by Clay, Lawrence and Ben R., no more waiver payments will go through Acquired, which means February payments will be fully processed with Stripe. Without going into much detail, this means that processing February will be *way way way* less painful. Thanks for the great work guys!
+
+# 2024-01-27
+
+## New invoicing tool
+
+After [discussing and passing on many of the open issues in our invoicing process](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md) at the start of the week, this week we’ve been fully dedicated to starting to develop tools to solve part of the issues.
+
+The first version of this is a command line (CLI) app we have dubbed `sh-invoicing`. If you are on the techy side, you can check the code here: [https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter).
+
+
+
+What running a CLI app looks like.
+
+This app will be a swiss-army knife of sorts to execute different tasks around preparing our invoices. The first tasks it can already do are:
+
+- Exporting account data from Dashboard, like Bookings, Verifications or Listings.
+- Exporting transactions from Stripe.
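+
+If you’ve never poked at a CLI app, here’s roughly what one looks like under the hood. The subcommand and option names below are invented for illustration and are not the actual `sh-invoicing` interface:
+
+```python
+# Bare-bones sketch of a CLI with subcommands, in the spirit of sh-invoicing.
+# Subcommand and option names are hypothetical.
+import argparse
+
+
+def export_dashboard(args: argparse.Namespace) -> None:
+    print(f"Exporting {args.entity} records for {args.month}...")
+
+
+def export_stripe(args: argparse.Namespace) -> None:
+    print(f"Exporting Stripe transactions for {args.month}...")
+
+
+def main() -> None:
+    parser = argparse.ArgumentParser(prog="sh-invoicing")
+    subcommands = parser.add_subparsers(dest="command", required=True)
+
+    dash = subcommands.add_parser("export-dashboard", help="Export account data")
+    dash.add_argument("entity", choices=["bookings", "verifications", "listings"])
+    dash.add_argument("--month", required=True, help="e.g. 2024-01")
+    dash.set_defaults(func=export_dashboard)
+
+    stripe_cmd = subcommands.add_parser("export-stripe", help="Export Stripe transactions")
+    stripe_cmd.add_argument("--month", required=True, help="e.g. 2024-01")
+    stripe_cmd.set_defaults(func=export_stripe)
+
+    args = parser.parse_args()
+    args.func(args)
+
+
+if __name__ == "__main__":
+    main()
+```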
+
+Our next planned feature is making the tool capable of reconciling the waivers paid through Stripe. This is the biggest pain of the current invoicing process, and we hope that tackling it will already make a great difference for our colleagues in finance.
+
+During the next week, we will be running some tests together with the finance team and finally use the tool for the first time to get our invoices for January’s activity.
+
+# 2024-01-19
+
+## Data for 2024 Bookings is now visible in PBI
+
+If you used any of the Power BI reports in the [Core Reporting Suite](https://www.notion.so/Superhog-Reporting-Production-Suite-6da7d7a2c37a43bc9b82802670e46b97?pvs=21) you might have noticed that bookings for 2024 were not appearing. [The issue](https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories/?workitem=12662) has now been dealt with. The underlying reason was some missing data in the [Core database](https://www.notion.so/Superhog-Core-Database-70786af3075e46d4a4e3ce303eb9ef00?pvs=21): we had a dates table that only ran until December 2023 and we had to push that further to 2024.
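+
+For context, a dates table is just one row per calendar day with handy attributes, and extending it is cheap. A hedged pandas sketch follows; the column names are made up and do not reflect the real Core schema:
+
+```python
+# Illustrative sketch: (re)build a date spine that reaches the end of 2024.
+# Column names are invented; the real Core dates table differs.
+import pandas as pd
+
+
+def build_date_spine(start: str, end: str) -> pd.DataFrame:
+    days = pd.date_range(start=start, end=end, freq="D")
+    return pd.DataFrame(
+        {
+            "date": days,
+            "year": days.year,
+            "month": days.month,
+            "day": days.day,
+            "weekday": days.day_name(),
+        }
+    )
+
+
+spine = build_date_spine("2019-01-01", "2024-12-31")
+print(spine.tail(3))
+```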
+
+This is also a good time to remind you that, if you ever see anything wrong or need any assistance around our dashboard, you can always [get in touch with us](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21).
+
+## The Invoicing Reformation moves forwards
+
+This week we have finalized the first phase of the [Invoicing Reformation](https://www.notion.so/Start-here-93981184e2154dee9a4800f51d8c6e89?pvs=21). During this 2-week effort, the Finance and Data teams have sat through quite a few brain-heavy sessions to walk the invoicing process end to end without leaving any stone unturned. We couldn’t be happier with the outcome:
+
+- We have successfully managed to [document the existing process](https://www.notion.so/As-Is-Documentation-2024-01-22-e75dbf8244274c018a93b46c471fbdc1?pvs=21), which helps dramatically in understanding how we should move forward with improving our way of working.
+- We have composed a [list of issues and improvement opportunities](https://www.notion.so/Issues-and-Improvement-opportunities-ec58b170df9541e2985e3db052c73930?pvs=21) that we will use to guide us forward in how to incrementally make invoicing more accurate, more efficient and more flexible.
+
+
+
+The Data hog making a massive effort to understand how the heck waivers work.
+
+I want to give a big shoutout to Elaine, Jamie and Amanda. They’ve been extremely helpful and we have come out of this with exactly what we need thanks to their effort.
+
+Next week we will start organizing work on improving the invoicing process. We expect to be able to deliver some of these improvements in the January cycle, and to have much, much better tooling by the time we have to process February.
+
+## dbt is working like a charm, preparing for production
+
+Between one finance meeting and the next, this week we managed to scrape together some time to work on our dbt code runner, [as we planned last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md). As part of it, we started the repository where we will store our data models.
+
+We already managed to connect it to the DWH, run some tests, and successfully deploy tables there. This means we now have a pretty complete landscape in terms of data capabilities: we can integrate data from different sources with Airbyte, store it in our DWH, transform it with dbt and present it through Power BI.
+
+Next week we will document how we deployed and integrated all of these components in order to get ready to deploy them in production and start using them.
+
+# 2024-01-12
+
+## Advances in Data Architecture
+
+Architecture has made some nice progress [since the last time we discussed it](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md). We began the week with some great work and an important milestone: we presented and discussed our proposed infrastructure architecture to Ben R, and we obtained his blessing on our proposal! He was happy with our design and we all agreed on moving forward with it.
+
+Besides that, we did some more work, like creating a connection between PBI and our DWH in its dev environment. With that, we managed to validate that we will be able to build reports on top of the DWH.
+
+Next week we will test the only missing part: a [dbt](https://docs.getdbt.com/docs/introduction) code runner. This element will be responsible for processing the data from our different sources to build clean, tidy and easy to read data models so that we can all build reports on top of great stuff. Once we manage to deploy and run this, we will finish our full documentation on the Data Architecture to get ready to re-deploy it in a Production environment.
+
+## Working through the details of Invoicing with Finance
+
+[As we introduced last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week was the start of a new project to improve our Invoicing process. We have named the project the Invoicing Reformation, a little nod to the [Protestant Reformation](https://en.wikipedia.org/wiki/Reformation). The project has [a little space here in Notion](https://www.notion.so/Start-here-93981184e2154dee9a4800f51d8c6e89?pvs=21) which you can check out if you feel curious.
+
+
+
+During the week, we’ve spent a lot of time together with the Finance team in long and heavy sessions where we deep dived into the details of executing the process. As we advance, we are both [documenting the existing process as it is](https://www.notion.so/As-Is-Documentation-2024-01-22-e75dbf8244274c018a93b46c471fbdc1?pvs=21) today, as well as [identifying and organizing all the existing issues](https://www.notion.so/Issues-and-Improvement-opportunities-ec58b170df9541e2985e3db052c73930?pvs=21) that make the process hard.
+
+It’s been a tough but productive week. The process has many important details, handles a lot of data and currently faces many corner cases, exceptions, data consistency issues and other problems. Next week we will keep discussing some parts of the process with Finance and Engineering before moving on [to the next phase](https://www.notion.so/Start-here-93981184e2154dee9a4800f51d8c6e89?pvs=21) where we will jump into problem solving mode.
+
+
+
+Some live footage of what you look like 5 hours into discussing all the little details of invoicing.
+
+# 2024-01-05
+
+Happy new year everyone!
+
+
+
+## First batch of cataloguing finished
+
+Finally, after some weeks of work, we are happy with the current state of our [Data Catalogue](https://www.notion.so/Data-Catalogue-78d91434aa1442cbb6cc13b73c7fb664?pvs=21) and think it is ready to start being useful for the whole company! A few bits and pieces will still be added by some colleagues, but the main trunk is already there.
+
+We will soon make a big announcement to get it in front of all colleagues in Superhog, but if you are already reading this, feel free to peek around.
+
+We expect the Catalogue to be a great asset that helps a lot silently. As we grow and build cool stuff, it’s going to be increasingly hard for each of us to know about *everything* that exists in Superhog. Our hope is that the Data Catalogue will prevent work duplication and will help colleagues share their work and leverage existing work from others. We already had our first case of this! The Business Systems team was looking into how the heck they could count how many guests are starting the verification journey… and [the product team was already doing that](https://www.notion.so/Mixpanel-Reporting-Suite-8b1f1bd30bab4c8ca33fb3d0c57f7e71?pvs=21).
+
+
+
+You when you don’t waste 20 hours with some data-thingy because you discover someone else has already done it for you.
+
+From now on, this becomes a living thing: feel free to reach out to us to edit and add content as the Catalogue becomes outdated, and do come back regularly to stay up to date with the existing databases and data products in the company.
+
+## Data architecture is moving to the cloud
+
+Continuing with our [Data Architecture saga](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), we have already started deploying our software and databases in a [development environment](https://dev.to/flippedcoding/difference-between-development-stage-and-production-d0p) in Azure. In doing so, we can continue testing in an environment that is much more similar to the final production deployment (the one that *you* will be using). This way, we slowly work out the rough edges and ensure that, once we finally go for it, we encounter no issues at all.
+
+This is still a WIP, but we expect to complete a first full round of work by Monday, when we will sit down with Ben R. (our CTO) to get his feedback on security, performance and the general soundness of what we are building. Once we have a final architecture and his blessing for it, we will move on to deploying the real production one that you will be benefitting from.
+
+
+
+Our data pig-services, happily living in the cloud.
+
+## Warming up to turn invoicing from pain to pleasure
+
+Invoicing our customers is quite an important thing: we won’t be paying our bills unless we do this timely and accurately.
+
+As Superhog has been growing, invoicing has also been turning into a more complex and challenging process. Our colleagues in Finance are currently feeling the pain because running the numbers for each customer is now far from trivial, and it’s taking a tremendous amount of effort and hours.
+
+TMT has decided it’s time to turn this upside down and make invoicing swift. To achieve this, we will run a multi-week project to crawl out of the current situation into a simpler, more automated and scalable process.
+
+The Data Team will be spearheading a first shot at improving data access and automating as much data wrangling as possible. Along the way, we will also count on the assistance of other teams (engineering, revops, product, etc.) to improve data management, business processes, and other elements that currently create challenges for invoicing.
+
+We will get in touch with the relevant stakeholders soon, and we will keep you updated on our advances here.
+
+# 2023-12-22
+
+Merry Christmas to everyone!
+
+![](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a/Untitled%2040.png)
+
+[https://xkcd.com/1933/](https://xkcd.com/1933/)
+
+## Closing in on finishing the first cataloguing batch
+
+After a nice week working on more details about the Core database and Hubspot, the team also had time to pay attention to other systems and data products within the company. Thanks to this, we are very close to finishing our first version of the [Data Catalogue](https://www.notion.so/Data-Catalogue-78d91434aa1442cbb6cc13b73c7fb664?pvs=21).
+
+If you are involved in any of the [Data Sources](https://www.notion.so/Data-Sources-739e7af77fd0407ca51f2a1c33e2c526?pvs=21) and [Data Products](https://www.notion.so/Data-Products-5030f44a0f764adebb1443ea0681f68a?pvs=21) contained in it… expect news from us soon. We will need your help to double check the contents and ensure it’s all top notch material.
+
+And for everyone else: we will make a nice, big announcement once the Catalogue is complete and ready for you to enjoy. Stay tuned!
+
+## Handing over of the PBI reports on top of Core
+
+After some sessions together with the Engineering Team to perform the handover, we can finally announce that the Data Team is now ready to take care of the existing [Power BI reports](https://www.notion.so/Superhog-Reporting-Production-Suite-6da7d7a2c37a43bc9b82802670e46b97?pvs=21) sitting on top of the [Dashboard database (Core)](https://www.notion.so/Superhog-Core-Database-70786af3075e46d4a4e3ce303eb9ef00?pvs=21).
+
+You can contact us to:
+
+- Request access
+- Notify issues on data
+- Request changes on the dashboard contents
+- Clarify doubts on the dashboard contents
+- And anything else you might need in relation to these reports.
+
+A small spoiler: as part of working on an improved architecture for our data systems, we expect to sunset these reports at some point in the upcoming weeks and provide similar ones on top of our future DWH afterwards. So, be aware that right now we will only address truly urgent hotfixes on the existing reports. Any new cool stuff we will build directly on top of our new DWH.
+
+## First architecture tests and discussions in progress
+
+
+
+The Data Team, busy at the engineering lab trying out weird ideas.
+
+The work on architecture has moved forward [since we last updated you](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md). After some discussions with our Engineering Team to align and design [a first draft of something that works for everyone](https://guardhog.atlassian.net/wiki/spaces/Data/pages/159023188/Data+Infra+Architecture), we have already started running our first tests.
+
+This involves designing and implementing different pieces of software in a development environment so we can validate that all moving parts work together as intended before we jump on to deploying the real stuff on our cloud. It’s a time of big headaches and lots of open questions, but also of excitement because all of this will be *insanely* useful and productive once we set it up properly.
+
+
+
+We deal with this crazy stuff so you don’t have to.
+
+We will need more time and tests to get this ready, but we are on the right path to provide great foundations for all of us to enjoy data here at Superhog. We will keep you posted as we get closer to an internal launch.
+
+# 2023-12-15
+
+## Successful kickoff of the Data Team
+
+[As we told you about last week](Data%20News%20-%20From%201st%20Dec%202023%20to%207th%20Feb%202025%2019d0446ff9c9803983f5db69fb38e82a.md), this week we had our big meeting to present our plans to some of the management in order to align on the future of our team’s role. The meeting went great: we left the room with a common vision on what the data team will be taking care of, clarity on the most important priorities and a rough plan for what comes next.
+
+If you are curious, you can check the slides we went through here: [20231214 Data Team Starting Presentation - Shared.pptx](https://guardhog.sharepoint.com/:p:/s/DataTeam/EVT1b3XYR6NJqur8QnWTsywBxbj9LeigursF4JiVmxUHpg?e=9Fo8Y7)
+
+
+
+The Data Team busy at room M3.01 convincing colleagues on how to move forward.
+
+## Documentation keeps on growing
+
+This week we kept working on documenting important systems within Superhog. Our focus at this point is both on the Dashboard backend database (which we have unilaterally baptised as Core, for clarity’s sake) and Hubspot. We’ve chosen these two because:
+
+- They are our largest, most used systems.
+- Some reporting already exists on top of them which we need to maintain.
+- Integrating certain bits of data across the two and making that accessible would be a great win for Superhog, since a lot of people regularly need insights that draw from both.
+
+The documentation is still very much a WIP, but you can feel free to take a peek both here [in Notion](https://www.notion.so/Data-Sources-739e7af77fd0407ca51f2a1c33e2c526?pvs=21) and [in Confluence](https://guardhog.atlassian.net/wiki/spaces/Data/overview?homepageId=152731908).
+
+## Drawing first bits of our future architecture
+
+Making data clean, ready and accessible to everyone in Superhog will require us to build a few things that are lacking today. Some of you are probably already familiar with ideas like [Datawarehouses](https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-a-data-warehouse/), [ETL](https://learn.microsoft.com/en-us/azure/architecture/data-guide/relational-data/etl) and [ELT](https://learn.microsoft.com/en-us/azure/architecture/data-guide/relational-data/etl#extract-load-and-transform-elt) pipelines, [orchestration engines](https://blog.devgenius.io/modern-data-orchestration-stack-with-prefect-2-0-airbyte-and-dbt-e7c0e9b27add), etc. All these bits of infrastructure that we are currently missing will lead to our Data Architecture: basically, a fancy name for a bunch of computers and software running on it that will:
+
+- Store data (and data about the data, which is called [Metadata](https://lakefs.io/blog/metadata-guide-for-data-engineers/))
+- Clean, process and move data
+- Make it available to you through reports, dashboards, etc.
+
+Building this infrastructure will allow us to do all of this in a secure and efficient way. Without it, building a report is a Herculean task (which some of you are probably familiar with, since you have to do it regularly *precisely* because this architecture is not yet in place. We will save you soon™).
+
+We are at an early stage, so the work at this point consists of designing, together with the Engineering Team, a Data Architecture that makes sense and will reasonably cover our needs during 2024. Once we have a plan… we will go and get our hands dirty setting up the metal.
+
+# 2023-12-07
+
+## End of the first round of contacts
+
+After a very intense first two weeks, we have met with most stakeholders in the company to better understand the current situation around data, systems, processes, etc.
+
+We still have plenty to do, and we will most surely sit down and talk *many* more times soon. If we haven’t been in touch with you already, and you want to discuss any need or requirement around data, please, take the lead and [get in touch with us](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21).
+
+## Preparing proposal
+
+Next week we will sit down with the TMT to present our vision and plans on how the Data Team and its work should look. We hope to come out of it with a shared vision and some plans that will trickle down to other teams. We will share the presentation in our next update.
+
+## Start of the Data Catalogue
+
+The brand new [Superhog Data Catalogue](https://www.notion.so/Data-Catalogue-78d91434aa1442cbb6cc13b73c7fb664?pvs=21) has been born and you can find it right here, in Notion!
+
+The Data Catalogue will act as an index to Data in Superhog. In it, we will place the full vision on the existing Data and Data Products within the company.
+
+It’s going to be a company-wide effort to keep it updated: we will probably get in touch at some point to ask for your help filling details out, and you can always [get in touch](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21) first if you want to proactively add contents to the Catalogue. You can take a look at the existing entries to get a feel for what kind of information gets included in the Catalogue.
+
+
+
+The Data Team, busy documenting the Data Catalogue.
+
+# 2023-12-01
+
+## Hello World
+
+The Data Team just started out! The first member of the team, Pablo, joined Superhog last Monday.
+
+There is a lot to untangle, discover, plan and execute, so it will take a bit until interesting stuff begins to appear here.
+
+If you have suggestions/concerns/ideas/proposals/etc around data in Superhog and we didn’t get a chance to sit and talk yet, feel free to [get in touch](https://www.notion.so/Data-Homepage-0ac0a2e52a8940c7ba4f31e5ffcc33e8?pvs=21).
+
+Have a nice weekend!
+
+
\ No newline at end of file
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a.md:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a.md:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/6ao96u9d8wta1.jpg b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/6ao96u9d8wta1.jpg
new file mode 100644
index 0000000..3c44567
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/6ao96u9d8wta1.jpg differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/6ao96u9d8wta1.jpg:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/6ao96u9d8wta1.jpg:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/6ao96u9d8wta1.jpg:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx
new file mode 100644
index 0000000..9e8a489
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Analysis_billing_vs_listing_country.xlsx:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2023-12-21_183539.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2023-12-21_183539.png
new file mode 100644
index 0000000..2d705ea
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2023-12-21_183539.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2023-12-21_183539.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2023-12-21_183539.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2023-12-21_183539.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-02-23_172356.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-02-23_172356.png
new file mode 100644
index 0000000..40158f5
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-02-23_172356.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-02-23_172356.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-02-23_172356.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-02-23_172356.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-05-10_171919.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-05-10_171919.png
new file mode 100644
index 0000000..8295fa1
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-05-10_171919.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-05-10_171919.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-05-10_171919.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Screenshot_2024-05-10_171919.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.gif b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.gif
new file mode 100644
index 0000000..b1f5ccc
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.gif differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.png
new file mode 100644
index 0000000..aef1738
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.webp
new file mode 100644
index 0000000..fd3a9d9
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 1.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 10.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 10.png
new file mode 100644
index 0000000..48c60b8
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 10.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 10.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 10.webp
new file mode 100644
index 0000000..361eaea
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 10.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 11.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 11.png
new file mode 100644
index 0000000..10627dc
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 11.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 11.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 11.webp
new file mode 100644
index 0000000..c0d2bdc
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 11.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 12.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 12.png
new file mode 100644
index 0000000..9e4f7bd
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 12.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 13.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 13.png
new file mode 100644
index 0000000..b851354
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 13.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 14.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 14.png
new file mode 100644
index 0000000..7cd3b5b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 14.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 15.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 15.png
new file mode 100644
index 0000000..da81203
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 15.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 16.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 16.png
new file mode 100644
index 0000000..338e61e
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 16.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 17.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 17.png
new file mode 100644
index 0000000..4d255f4
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 17.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 18.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 18.png
new file mode 100644
index 0000000..dde142f
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 18.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 19.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 19.png
new file mode 100644
index 0000000..81b3d8b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 19.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 2.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 2.png
new file mode 100644
index 0000000..f98991a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 2.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 2.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 2.webp
new file mode 100644
index 0000000..dd4a085
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 2.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 20.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 20.png
new file mode 100644
index 0000000..fa38436
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 20.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 21.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 21.png
new file mode 100644
index 0000000..881059f
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 21.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 22.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 22.png
new file mode 100644
index 0000000..3ac2b05
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 22.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 23.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 23.png
new file mode 100644
index 0000000..218da69
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 23.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 24.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 24.png
new file mode 100644
index 0000000..309ccd3
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 24.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 25.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 25.png
new file mode 100644
index 0000000..85d7069
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 25.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 26.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 26.png
new file mode 100644
index 0000000..edf4ea2
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 26.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 27.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 27.png
new file mode 100644
index 0000000..63805f0
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 27.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 28.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 28.png
new file mode 100644
index 0000000..d5f4e54
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 28.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 29.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 29.png
new file mode 100644
index 0000000..2552cca
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 29.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 3.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 3.png
new file mode 100644
index 0000000..7787457
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 3.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 3.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 3.webp
new file mode 100644
index 0000000..17b9f06
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 3.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 30.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 30.png
new file mode 100644
index 0000000..25a346a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 30.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 31.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 31.png
new file mode 100644
index 0000000..38a3f8b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 31.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 32.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 32.png
new file mode 100644
index 0000000..b489479
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 32.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 33.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 33.png
new file mode 100644
index 0000000..dedf11b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 33.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 34.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 34.png
new file mode 100644
index 0000000..666c860
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 34.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 35.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 35.png
new file mode 100644
index 0000000..44373da
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 35.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 36.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 36.png
new file mode 100644
index 0000000..3f33ff3
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 36.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 37.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 37.png
new file mode 100644
index 0000000..7d9a1ab
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 37.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 38.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 38.png
new file mode 100644
index 0000000..3bb4ba5
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 38.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 39.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 39.png
new file mode 100644
index 0000000..c12ddd5
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 39.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 4.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 4.png
new file mode 100644
index 0000000..fddc123
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 4.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 4.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 4.webp
new file mode 100644
index 0000000..8cd66a0
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 4.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 40.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 40.png
new file mode 100644
index 0000000..abc7124
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 40.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 41.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 41.png
new file mode 100644
index 0000000..915a0dd
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 41.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 42.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 42.png
new file mode 100644
index 0000000..abc7124
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 42.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 5.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 5.png
new file mode 100644
index 0000000..1c81460
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 5.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 5.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 5.webp
new file mode 100644
index 0000000..8f799e6
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 5.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 6.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 6.png
new file mode 100644
index 0000000..5867350
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 6.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 6.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 6.webp
new file mode 100644
index 0000000..d4f98be
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 6.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 7.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 7.png
new file mode 100644
index 0000000..0810757
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 7.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 7.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 7.webp
new file mode 100644
index 0000000..9c99bda
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 7.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 8.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 8.png
new file mode 100644
index 0000000..b4dfb62
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 8.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 8.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 8.webp
new file mode 100644
index 0000000..76f807f
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 8.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 9.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 9.png
new file mode 100644
index 0000000..b098d19
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 9.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 9.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 9.webp
new file mode 100644
index 0000000..9601e1b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled 9.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.gif b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.gif
new file mode 100644
index 0000000..f6042ab
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.gif differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.jpeg b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.jpeg
new file mode 100644
index 0000000..2fad023
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.jpeg differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.png
new file mode 100644
index 0000000..c582182
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.webp
new file mode 100644
index 0000000..91911d1
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/Untitled.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/V0kiXIAfrgWG6wxLSuzG--1--0jg98.jpg b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/V0kiXIAfrgWG6wxLSuzG--1--0jg98.jpg
new file mode 100644
index 0000000..f5d7467
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/V0kiXIAfrgWG6wxLSuzG--1--0jg98.jpg differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/datateam.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/datateam.png
new file mode 100644
index 0000000..a414eaf
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/datateam.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/food-chocolate.gif b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/food-chocolate.gif
new file mode 100644
index 0000000..1bd17f5
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/food-chocolate.gif differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/giphy.webp b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/giphy.webp
new file mode 100644
index 0000000..84fe4fb
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/giphy.webp differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 1.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 1.png
new file mode 100644
index 0000000..2465753
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 1.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 10.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 10.png
new file mode 100644
index 0000000..e82cec8
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 10.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 11.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 11.png
new file mode 100644
index 0000000..7f11446
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 11.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 12.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 12.png
new file mode 100644
index 0000000..2c5a881
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 12.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 13.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 13.png
new file mode 100644
index 0000000..571af40
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 13.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 14.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 14.png
new file mode 100644
index 0000000..183462e
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 14.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 15.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 15.png
new file mode 100644
index 0000000..c143717
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 15.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 16.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 16.png
new file mode 100644
index 0000000..5e19b6c
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 16.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 17.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 17.png
new file mode 100644
index 0000000..65145b9
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 17.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 18.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 18.png
new file mode 100644
index 0000000..760d363
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 18.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 19.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 19.png
new file mode 100644
index 0000000..86a31e0
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 19.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 2.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 2.png
new file mode 100644
index 0000000..d6dc8d9
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 2.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 20.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 20.png
new file mode 100644
index 0000000..10382a6
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 20.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 21.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 21.png
new file mode 100644
index 0000000..f63d3c4
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 21.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 22.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 22.png
new file mode 100644
index 0000000..04dacf3
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 22.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 23.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 23.png
new file mode 100644
index 0000000..7a5cb38
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 23.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 24.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 24.png
new file mode 100644
index 0000000..657373d
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 24.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 25.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 25.png
new file mode 100644
index 0000000..2e3a3ff
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 25.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 26.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 26.png
new file mode 100644
index 0000000..2c66fdd
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 26.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 27.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 27.png
new file mode 100644
index 0000000..2ba6898
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 27.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 28.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 28.png
new file mode 100644
index 0000000..a2d2c7d
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 28.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 29.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 29.png
new file mode 100644
index 0000000..e1b60ec
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 29.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 3.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 3.png
new file mode 100644
index 0000000..34f87ed
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 3.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 30.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 30.png
new file mode 100644
index 0000000..595854e
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 30.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 31.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 31.png
new file mode 100644
index 0000000..116c2f4
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 31.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 32.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 32.png
new file mode 100644
index 0000000..b0cefb8
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 32.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 33.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 33.png
new file mode 100644
index 0000000..43e4098
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 33.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 34.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 34.png
new file mode 100644
index 0000000..14b8342
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 34.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 35.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 35.png
new file mode 100644
index 0000000..8c7ac4b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 35.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 36.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 36.png
new file mode 100644
index 0000000..38133e7
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 36.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 37.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 37.png
new file mode 100644
index 0000000..8529486
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 37.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 38.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 38.png
new file mode 100644
index 0000000..c978b93
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 38.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 39.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 39.png
new file mode 100644
index 0000000..4bdd935
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 39.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 4.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 4.png
new file mode 100644
index 0000000..6d0585b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 4.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 40.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 40.png
new file mode 100644
index 0000000..fd103c9
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 40.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 41.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 41.png
new file mode 100644
index 0000000..a612bd5
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 41.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 42.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 42.png
new file mode 100644
index 0000000..b5df657
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 42.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 43.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 43.png
new file mode 100644
index 0000000..4c235b0
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 43.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 44.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 44.png
new file mode 100644
index 0000000..be0657c
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 44.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 45.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 45.png
new file mode 100644
index 0000000..f3691c8
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 45.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 46.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 46.png
new file mode 100644
index 0000000..c90cc7b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 46.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 47.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 47.png
new file mode 100644
index 0000000..6340ec5
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 47.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 48.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 48.png
new file mode 100644
index 0000000..05c93ef
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 48.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 49.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 49.png
new file mode 100644
index 0000000..513a9c7
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 49.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 5.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 5.png
new file mode 100644
index 0000000..7db1fcc
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 5.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 50.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 50.png
new file mode 100644
index 0000000..8c06eb3
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 50.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 6.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 6.png
new file mode 100644
index 0000000..0a39e63
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 6.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 7.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 7.png
new file mode 100644
index 0000000..741db19
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 7.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 8.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 8.png
new file mode 100644
index 0000000..1d7aec9
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 8.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 9.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 9.png
new file mode 100644
index 0000000..fad89e2
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image 9.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image.png
new file mode 100644
index 0000000..3d6f67e
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image_(5).png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image_(5).png
new file mode 100644
index 0000000..ad011c7
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/image_(5).png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/laptop-smoking-smoking-laptop.gif b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/laptop-smoking-smoking-laptop.gif
new file mode 100644
index 0000000..15633bf
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/laptop-smoking-smoking-laptop.gif differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/yess-yes.gif b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/yess-yes.gif
new file mode 100644
index 0000000..8078dbd
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/Data News - From 1st Dec 2023 to 7th Feb 2025 19d0446ff9c9803983f5db69fb38e82a/yess-yes.gif differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/continuousintegrationdronepr-build-is.jpg b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/continuousintegrationdronepr-build-is.jpg
new file mode 100644
index 0000000..5f634ae
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/continuousintegrationdronepr-build-is.jpg differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 1.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 1.png
new file mode 100644
index 0000000..22853a8
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 1.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 10.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 10.png
new file mode 100644
index 0000000..548c874
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 10.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 11.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 11.png
new file mode 100644
index 0000000..1a23d04
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 11.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 12.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 12.png
new file mode 100644
index 0000000..3092985
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 12.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 13.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 13.png
new file mode 100644
index 0000000..78ea99a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 13.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 14.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 14.png
new file mode 100644
index 0000000..4abf058
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 14.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 15.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 15.png
new file mode 100644
index 0000000..84ad456
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 15.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 16.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 16.png
new file mode 100644
index 0000000..cd33f36
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 16.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 17.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 17.png
new file mode 100644
index 0000000..2a52dbb
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 17.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 18.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 18.png
new file mode 100644
index 0000000..3e7f0c4
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 18.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 19.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 19.png
new file mode 100644
index 0000000..02167f0
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 19.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 2.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 2.png
new file mode 100644
index 0000000..ca80915
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 2.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 20.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 20.png
new file mode 100644
index 0000000..76e46bc
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 20.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 21.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 21.png
new file mode 100644
index 0000000..ba66abe
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 21.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 22.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 22.png
new file mode 100644
index 0000000..371348f
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 22.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 23.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 23.png
new file mode 100644
index 0000000..36e8e74
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 23.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 23.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 23.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 23.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 24.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 24.png
new file mode 100644
index 0000000..20acbd2
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 24.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 24.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 24.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 24.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 25.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 25.png
new file mode 100644
index 0000000..6c9770a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 25.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 25.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 25.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 25.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 26.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 26.png
new file mode 100644
index 0000000..1b8cf1c
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 26.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 26.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 26.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 26.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 27.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 27.png
new file mode 100644
index 0000000..1f94e39
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 27.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 27.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 27.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 27.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 28.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 28.png
new file mode 100644
index 0000000..87fd1a0
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 28.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 28.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 28.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 28.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 29.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 29.png
new file mode 100644
index 0000000..b9d73ec
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 29.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 29.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 29.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 29.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 3.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 3.png
new file mode 100644
index 0000000..8586d1a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 3.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 3.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 3.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 3.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 30.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 30.png
new file mode 100644
index 0000000..884d2bd
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 30.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 30.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 30.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 30.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 31.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 31.png
new file mode 100644
index 0000000..ec8f8de
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 31.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 31.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 31.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 31.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 32.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 32.png
new file mode 100644
index 0000000..f4bf131
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 32.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 32.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 32.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 32.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 33.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 33.png
new file mode 100644
index 0000000..6ee9d1b
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 33.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 33.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 33.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 33.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 34.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 34.png
new file mode 100644
index 0000000..29f3e4d
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 34.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 34.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 34.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 34.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 35.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 35.png
new file mode 100644
index 0000000..18e9d86
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 35.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 35.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 35.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 35.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 36.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 36.png
new file mode 100644
index 0000000..69d95e3
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 36.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 36.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 36.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 36.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 37.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 37.png
new file mode 100644
index 0000000..7ef8044
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 37.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 37.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 37.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 37.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 4.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 4.png
new file mode 100644
index 0000000..0956970
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 4.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 4.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 4.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 4.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 5.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 5.png
new file mode 100644
index 0000000..cd7a94a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 5.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 5.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 5.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 5.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 6.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 6.png
new file mode 100644
index 0000000..3343763
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 6.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 6.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 6.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 6.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 7.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 7.png
new file mode 100644
index 0000000..58e728c
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 7.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 7.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 7.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 7.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 8.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 8.png
new file mode 100644
index 0000000..8d7292d
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 8.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 8.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 8.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 8.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 9.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 9.png
new file mode 100644
index 0000000..977e82d
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 9.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 9.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 9.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image 9.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image.png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image.png
new file mode 100644
index 0000000..d41e42a
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image.png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image.png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image.png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image.png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image_(9).png b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image_(9).png
new file mode 100644
index 0000000..54de401
Binary files /dev/null and b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image_(9).png differ
diff --git a/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image_(9).png:Zone.Identifier b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image_(9).png:Zone.Identifier
new file mode 100644
index 0000000..409c718
--- /dev/null
+++ b/notion_data_news/Private & Shared/Data News 7dc6ee1465974e17b0898b41a353b461/image_(9).png:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_news.zip
diff --git a/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-08-05 aa7e1cf16b6e410b86ee0787a195be48.md b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-08-05 aa7e1cf16b6e410b86ee0787a195be48.md
new file mode 100644
index 0000000..8e035ce
--- /dev/null
+++ b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-08-05 aa7e1cf16b6e410b86ee0787a195be48.md
@@ -0,0 +1,116 @@
+# (Legacy) Technical Documentation - 2024-08-05
+
+This documentation follows a top-down approach. We start with what is visible to users through PBI and work backwards to the details of how things are structured and computed within the DWH. This way we keep the overall picture of the project in mind before jumping into the details.
+
+**Table of contents**
+
+# Power BI Reporting
+
+## Overview
+
+We have a single report for Business KPIs at this stage. It’s Main KPIs and it’s published in Business Overview. [Link to the repository here](https://guardhog.visualstudio.com/Data/_git/data-pbi-reports?path=/reports/business_overview_main_kpi).
+
+The reporting contains two ways of viewing KPIs: Global KPIs and KPIs by Deal. The mapping of KPIs per report page is the following:
+
+- Global: MTD, Monthly Overview, Evolution over Time
+- by Deal: Detail by Deal, Deal Comparison
+
+Additionally, the reporting contains a Readme page with a detailed explanation of each tab. Lastly, the report contains a Data Glossary that specifies how metrics are computed and whether there are any data quality issues around certain metrics.
+
+## Data Sources
+
+Since there are two ways of visualising KPIs, Global and by Deal, this report contains two sources. These are, in Reporting:
+
+- Global: `mtd_aggregated_metrics`
+- by Deal: `monthly_aggregated_metrics_history_by_deal`
+
+
+
+Note the naming convention. Both names contain `aggregated_metrics`, meaning that at this stage metrics from different sources are aggregated within these two models. The main difference between the two is that the KPIs by Deal are considered at the `monthly_history_by_deal` level, while the Global KPIs are `mtd` (month to date). This is on purpose and has consequences for how the KPIs are computed.
+
+Let’s take a look at what these models look like:
+
+For Global KPIs, `mtd_aggregated_metrics`:
+
+
+
+**For each date and each metric**, we have the `value`, `previous year value` and the `relative increment` between the value and the previous year value. Other important fields are the number format, which determines how the metric is formatted within Power BI, and order by, which determines how it is ordered within the visualisation of the KPIs, especially in the MTD tab. Lastly, the dates that are displayed are either the last day of historical months OR any day of the current month, for MTD purposes.
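+
+As an illustration, a query against this model could look like the following sketch (column names are taken from the description above and the schema prefix is assumed; the real table may differ slightly):
+
+```sql
+-- Illustrative sketch only: one row per date and metric, with the value,
+-- the previous year value and the relative increment between them.
+select
+    date,
+    metric,
+    value,
+    previous_year_value,
+    relative_increment,
+    number_format,  -- drives how the metric is formatted in Power BI
+    order_by        -- drives how the KPI is ordered, especially in the MTD tab
+from reporting.mtd_aggregated_metrics
+order by date, order_by
+```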
+
+For KPIs by deal, `monthly_aggregated_metrics_history_by_deal`
+
+
+
+**For each date and each id_deal**, we have only the **values of each metric in separate columns**. Note that, unlike the MTD part, this is not aggregated at metric level, and there is no previous year value or relative increment. This impacts how the intermediate aggregations are handled.
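+
+For comparison, a sketch of a query against the by Deal model (the schema prefix and the metric column names are made up for illustration):
+
+```sql
+-- Illustrative sketch only: one row per date and id_deal,
+-- with each metric in its own column rather than a metric/value pair.
+select
+    date,
+    id_deal,
+    created_bookings,  -- example metric column (assumed name)
+    total_revenue      -- example metric column (assumed name)
+from reporting.monthly_aggregated_metrics_history_by_deal
+order by date, id_deal
+```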
+
+# Global vs. By Deal KPIs computation
+
+## Global KPIs schema
+
+
+
+## KPIs by Deal schema
+
+
+
+Here are the main goals of each stage, and the similarities and differences to take into account:
+
+- **Reporting**:
+ - **Goal**: materialise and expose the data that is going to be available for users.
+ - **Similarities**
+ - Both flows have a table in reporting that exposes the information for PBI usage.
+ - **Differences**
+        - The by Deal part is a replica of what is available in intermediate. However, for Global this is not exactly the case, since in `mtd_aggregated_metrics` we force the exclusion of Xero-based metrics for the current month and the previous one. This is to 1) avoid displaying partial invoicing data that would affect figures such as revenue, while 2) ensuring that within the DWH all data is up-to-date, even if the invoicing cycle has not finalised. You can find the exclusion condition [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/reporting/general/mtd_aggregated_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=22&lineEnd=23&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+        - The naming convention differs, as explained before, because of how the KPIs are computed and how the information is displayed in these two models (see Data Sources in the previous section).
+- **Aggregation**:
+ - **Goal**: aggregates different sources of metrics data into a single model before exposing it.
+ - **Similarities**
+ - Both flows have a previous step in intermediate, before reporting, that contains the final computation of KPIs, namely `int_mtd_aggregated_metrics` and `monthly_aggregated_metrics_by_deal`.
+ - **Differences**
+ - The Global KPIs have two steps:
+            - `int_mtd_vs_previous_year_metrics`: ensures the [plain combination of the sources + the computation of derived metrics](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=27&lineEnd=28&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents) AND [the computation vs. previous year by auto-joining the combined CTE](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=187&lineEnd=188&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+ - `int_mtd_aggregated_metrics`: ensures the unpivot display i.e., all different metrics are aggregated into a metrics column. [Here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_aggregated_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=1&lineEnd=2&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents) we also specify the fields of the number format, order by and which name tag (metric) corresponds to each value, previous year value and relative increment.
+ - The KPIs by Deal have just one step:
+            - `int_monthly_aggregated_metrics_history_by_deal` only handles the [plain combination of the sources + the computation of derived metrics](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_aggregated_metrics_history_by_deal.sql) on a By Deal basis.
+- **Sources**:
+ - **Goal**: Handle all specific logic for retrieving each metric from intermediate master tables.
+ - **Similarities**
+ - All metrics depending on the same sources are encapsulated within each source model.
+ - All follow a strategy of logic computation within each CTE ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__mtd_guest_journey_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=26&lineEnd=27&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__mtd_guest_payments_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=17&lineEnd=18&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents)) with a final aggregation of a date model with left join on the different CTEs ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__monthly_guest_payments_history_by_deal.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=80&lineEnd=81&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents)). See links for some example.
+ - **Differences**:
+        - Global models need to force a join with `int_dates_mtd` in each CTE to allow the metric to be aggregated up to a certain day in the past, for MTD purposes (see the sketch after this list). This is very resource-intensive; since it is not needed in the By Deal models, you don’t join with `int_dates_by_deal` in the CTEs there, only in the final aggregation.
+        - By Deal models need to have a Deal. This means that sometimes, since the Deal is not available in a source model (e.g. in Guest Journeys the verification_requests table has no deal), additional joins are needed to retrieve the deal id. This is not needed for Global models, which simplifies their computation.
+- **Dates**:
+ - **Goal**: Provide an empty date framework that serves as the skeleton of the needed dates/granularity for each KPI type.
+ - **Similarities**:
+ - Each KPI visualisation type, Global and by Deal, have a unique dependency on a Date model.
+ - **Differences**:
+        - The `int_dates_mtd` model only contains dates and allows for the MTD aggregation ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_dates_mtd.sql)), while `int_dates_by_deal` contains the Deal aggregation - hence the by deal suffix - but does not allow for the MTD aggregation - hence no mtd prefix ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_dates_by_deal.sql)).
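+
+As a minimal sketch of the MTD pattern described above (the event table and its columns are made up for illustration, and the SQL dialect may differ from the one used in the DWH):
+
+```sql
+-- Illustrative sketch only: joining each MTD date against raw events so that
+-- every date aggregates the metric from the start of its month up to that day.
+-- This non-equi join is what makes the Global models resource-intensive.
+select
+    d.date,
+    count(b.id_booking) as created_bookings_mtd
+from int_dates_mtd as d
+left join bookings as b
+    on b.created_at >= date_trunc('month', d.date)
+   and b.created_at <= d.date
+group by d.date
+```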
+
+# How to create a new metric?
+
+Follow these steps:
+
+1. Identify if the metric is Global, by Deal or both. Most likely it’s both, unless you’re creating a Deal-based metric for which a by Deal breakdown wouldn’t make sense. This will clarify whether you need to modify one of the branches or both of them.
+2. Identify the source of your metric. From here we can have different possibilities:
+    1. If, for instance, the metric is related to Bookings, you might want to add it in `int_core__mtd_booking_metrics` and `int_core__monthly_bookings_history_by_deal`. Similar reasoning applies to Guest Journeys, Invoicing, Guest Payments, Listings, etc.
+ 2. If the metric “type” does not exist yet, such as implementing a Hubspot-based client onboarding opportunities metrics, ideally you’d create a standalone model by replicating the structure of an already existing source model. Copy-paste and adapt 🙂
+    3. If your metric is a combination of two or more different sources, such as Total Revenue by Booking Cancelled, you will need to check whether the submetrics are already available. If they are, you can skip this part; if not, go to point a) or b). If it’s a derived metric within the same source, such as Guest Journey with Payment per Guest Journey Created, you can directly add it in `int_core__mtd_guest_journey_metrics` and `int_core__monthly_guest_journey_history_by_deal`.
+3. Propagate to intermediate aggregations. Let’s split Global and Deal based:
+ 1. Global KPIs:
+        1. Reference your newly created metric in the plain combination of sources in `int_mtd_vs_previous_year_metrics`. If you need to combine multiple metrics from different sources, this is the place to go. Remember to apply `nullif(coalesce(x,0)+coalesce(y,0),0)`-style structures for combined metrics, so that metrics still get combined when one of them is null without causing a division by zero error at the final aggregation 🙂 (see the sketch after this list). Example [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=110&lineEnd=111&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+ 2. Use the macro `calculate_safe_relative_increment` to compute the value, previous_year_value and relative_increment in the final query ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=187&lineEnd=188&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents)).
+ 2. KPIs by Deal:
+        1. Reference your newly created metric in the plain combination of sources in `int_monthly_aggregated_metrics_history_by_deal`. If you need to combine multiple metrics from different sources, this is the place to go. Remember to apply `nullif(coalesce(x,0)+coalesce(y,0),0)`-style structures for combined metrics, so that metrics still get combined when one of them is null without causing a division by zero error at the final aggregation 🙂. Example [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_aggregated_metrics_history_by_deal.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=95&lineEnd=96&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+4. Exposure of metrics. Let’s split Global and Deal based:
+ 1. Global KPIs:
+        1. Add the configuration of your new metric in `int_mtd_aggregated_metrics`. You’ll need to parametrise the order, the metric (name tag that will be displayed in the reporting), the number format (for formatting in the reporting) and which values it is going to use. Order by is informative, so you could replicate an existing one, although I recommend choosing a value that is not already in use so it’s clearer how we want to order the KPIs. **Important: keep in mind that merging and refreshing this will directly make this metric available and visible in the dashboard.**
+ 2. If your metric is or uses an invoicing metric that should not be displayed in the current month or the previous month, validate that the [condition applied](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/reporting/general/mtd_aggregated_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=38&lineEnd=39&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents) in the reporting file of `mtd_aggregated_metrics` works well.
+ 3. Modify Data Glossary to include the description of your new metric. Note that there’s no additional need to change anything else on the Power BI for Global metrics.
+ 2. Deal KPIs:
+ 1. Propagate the new metric from `int_monthly_aggregated_metrics_history_by_deal` to `monthly_aggregated_metrics_history_by_deal`. If this metric is or uses an invoicing metric, please use the macro `is_date_before_previous_month`. Example [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/reporting/general/monthly_aggregated_metrics_history_by_deal.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=31&lineEnd=32&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+ 2. In Power BI, once the model in reporting has been refreshed, you will need to manually add the new metrics in the tabs: Detail by Deal and Deal Comparison. For each new metric, in PBI, you will need to manually specify the number format, the order of display and the name of the metric.
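+
+A minimal sketch of the safe-combination pattern mentioned in step 3 (metric and CTE names are made up for illustration):
+
+```sql
+-- Illustrative sketch only: coalesce() keeps the sum meaningful when one
+-- submetric is null, and nullif(..., 0) turns a zero denominator into null
+-- so the final aggregation never raises a division by zero error.
+select
+    date,
+    guest_journeys_with_payment
+        / nullif(coalesce(created_guest_journeys, 0) + coalesce(imported_guest_journeys, 0), 0)
+        as guest_journeys_with_payment_ratio
+from combined_sources
+```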
+
+# Additional notes
+
+1. You’ve seen that the two ways of displaying data at this stage are not consistent - beyond the fact of having the Deal granularity or not. This has some pros and cons, and it changes how a new metric is created. Global is much more DWH dependent, while By Deal needs more PBI modifications.
+2. At this stage, we want to implement metrics by different dimensions, and this is actually complicated to generalise within the current setup. We’re investigating a more scalable solution called MetricFlow that could potentially change completely the structure that has been presented in this Notion page.
\ No newline at end of file
diff --git a/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-08-05 aa7e1cf16b6e410b86ee0787a195be48.md:Zone.Identifier b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-08-05 aa7e1cf16b6e410b86ee0787a195be48.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-08-05 aa7e1cf16b6e410b86ee0787a195be48.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-09-20 1070446ff9c980a4a850f159d4f55f8b.md b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-09-20 1070446ff9c980a4a850f159d4f55f8b.md
new file mode 100644
index 0000000..11b3cc9
--- /dev/null
+++ b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-09-20 1070446ff9c980a4a850f159d4f55f8b.md
@@ -0,0 +1,156 @@
+# (Legacy) Technical Documentation - 2024-09-20
+
+This documentation follows a top-down approach. We start with what is visible to users through PBI and work backwards to the details of how things are structured and computed within the DWH. This way we keep the overall picture of the project in mind before jumping into the details.
+
+**Table of contents**
+
+# Power BI Reporting
+
+## Overview
+
+We have a single report for Business KPIs at this stage. It’s Main KPIs and it’s published in Business Overview. [Link to the repository here](https://guardhog.visualstudio.com/Data/_git/data-pbi-reports?path=/reports/business_overview_main_kpi).
+
+The reporting contains two ways of viewing KPIs: **Global KPIs** and **KPIs by Deal**. The mapping of KPIs per report page is the following:
+
+- **Global**: MTD, Monthly Overview, Global Evolution over Time, Detail by Category
+- **by Deal**: Detail by Deal, Deal Comparison
+
+Additionally, the reporting contains a Readme page with a detailed explanation of each tab. Lastly, the report contains a Data Glossary that specifies how metrics are computed and whether there are any data quality issues around certain metrics.
+
+You will notice that **Global KPIs include Categories**. These are effectively dimensions by which we slice the data. Even though a “detail by deal” could be considered as another dimension, it is treated as a separate entity since its computation is independent from the Global KPIs.
+
+At the time of writing this page, the categories are:
+
+- Global
+- By # of Listings Booked in 12 Months
+- By Billing Country
+
+## Data Sources
+
+Since there are two ways of visualising KPIs, Global and by Deal, this report contains two sources. These are, in Reporting:
+
+- Global: `mtd_aggregated_metrics`
+- by Deal: `monthly_aggregated_metrics_history_by_deal`
+
+
+
+Note the naming convention. Both names contain `aggregated_metrics`, meaning that at this stage metrics from different sources are aggregated within these two models. The main difference between the two is that the KPIs by Deal are considered at the `monthly_history_by_deal` level, while the Global KPIs are `mtd` (month to date). This is on purpose and has consequences for how the KPIs are computed.
+
+Let’s take a look at what these models look like:
+
+For Global KPIs, `mtd_aggregated_metrics`:
+
+
+
+**For each date, dimension, dimension_value and each metric**, we have the `value`, `previous year value` and the `relative increment` between the value and the previous year value. Other important fields are the `number format`, which determines how the metric is formatted within Power BI, and `order by`, which determines how it is ordered within the visualisation of the KPIs, especially in the MTD tab. You will also see that we have a `relative increment with sign format` that is used to apply the red-to-white-to-green conditional formatting in PBI. Lastly, the dates that are displayed are either the last day of historical months OR any day of the current month, for MTD purposes.
+
+For KPIs by deal, `monthly_aggregated_metrics_history_by_deal`
+
+
+
+**For each date and each id_deal**, we have only the **values of each metric in separate columns**. Additionally, we have a few deal attributes or informative fields, such as the `main deal name`, `main billing country` and the `deal lifecycle state` for that month. Note that, unlike the MTD part, this is not aggregated at metric level, and there is no previous year value or relative increment. This impacts how the intermediate aggregations are handled.
+
+# Global vs. By Deal KPIs computation
+
+Below you will find a simplified schema documentation. It does not include all dependencies, since the full graph is massive 🙂
+
+It just focuses on 4 areas, from left (down) to right (top):
+
+- Date models, which act as empty skeletons
+- Source models, where all the complex logic of metric & dimension computation happens
+- Aggregation models, which mainly aggregate the different source models into a unified model, with additional computations
+- Reporting models, which are used to expose the data to Power BI. On top of these, dedicated data tests may be present to ensure certain levels of data quality.
+
+## Global KPIs schema
+
+
+
+## KPIs by Deal schema
+
+
+
+Here are the main goals of each stage, and the similarities and differences to take into account:
+
+- **Reporting**:
+ - **Goal**: materialise and expose the data that is going to be available for users.
+ - **Similarities**
+ - Both flows have a table in reporting that exposes the information for PBI usage.
+ - **Differences**
+        - The by Deal part is a replica of what is available in intermediate. However, for Global this is not exactly the case, since in `mtd_aggregated_metrics` we force the exclusion of Xero-based metrics for the current month and the previous one. This is to 1) avoid displaying partial invoicing data that would affect figures such as revenue, while 2) ensuring that within the DWH all data is up-to-date, even if the invoicing cycle has not finalised. You can find the exclusion condition [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/reporting/general/mtd_aggregated_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=22&lineEnd=23&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+        - The naming convention differs, as explained before, because of how the KPIs are computed and how the information is displayed in these two models (see Data Sources in the previous section).
+        - Additionally, two data tests depend on `mtd_aggregated_metrics`. These ensure 1) a certain consistency between the aggregation of a metric across all category values of any non-global category and the global value, and 2) that the latest values of the day do not differ excessively from what was observed in previous days, i.e. outlier detection.
+- **Aggregation**:
+ - **Goal**: aggregates different sources of metrics data into a single model before exposing it.
+ - **Similarities**
+ - Both flows have a previous step in intermediate, before reporting, that contains the final computation of KPIs, namely `int_mtd_aggregated_metrics` and `monthly_aggregated_metrics_by_deal`.
+ - **Differences**
+ - The Global KPIs have two steps:
+            - `int_mtd_vs_previous_year_metrics`: ensures the [plain combination of the sources + the computation of derived metrics](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=%2Fmodels%2Fintermediate%2Fcross%2Fint_mtd_vs_previous_year_metrics.sql&version=GBmaster&_a=contents) AND [the computation vs. previous year by auto-joining the combined CTE](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmaster&line=235&lineEnd=236&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+ - `int_mtd_aggregated_metrics`: ensures the unpivot display i.e., all different metrics are aggregated into a metrics column. [Here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_aggregated_metrics.sql&version=GBmaster&line=1&lineEnd=2&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents) we also specify the fields of the number format, order by and which name tag (metric) corresponds to each value, previous year value and relative increment.
+ - The KPIs by Deal have just one step:
+ - `int_monthly_aggregated_metrics_history_by_deal` only handles the [plain combination of the sources + the computation of derived metrics](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_aggregated_metrics_history_by_deal.sql) on the By Deal basis.
+- **Sources**:
+ - **Goal**: Handle all specific logic for retrieving each metric from intermediate master tables.
+ - **Similarities**
+ - All metrics depending on the same sources are encapsulated within each source model.
+ - All follow a strategy of logic computation within each CTE ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__mtd_guest_payments_metrics.sql&version=GBmaster&line=29&lineEnd=30&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents)) with a final aggregation of a date model with left join on the different CTEs ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__monthly_guest_payments_history_by_deal.sql&version=GBmaster&line=80&lineEnd=81&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents)). See links for some example.
+ - **Differences**:
+        - Global models have jinja code that loops over the set of categories specified in the macro `get_kpi_dimensions` in [business_kpis_configuration](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/macros/business_kpis_configuration.sql). For each dimension or category, different joins and where conditions can apply (see the sketch after this list). In contrast, By Deal models are far simpler since id_deal is the only granularity.
+        - Global models need to force a join with `int_dates_mtd` in each CTE to allow the metric to be aggregated up to a certain day in the past, for MTD purposes. This is very resource-intensive; since it is not needed in the By Deal models, you don’t join with `int_dates_by_deal` in the CTEs there, only in the final aggregation.
+        - By Deal models need to have a Deal. This means that sometimes, since the Deal is not available in a source model (e.g. in Guest Journeys the verification_requests table has no deal), additional joins are needed to retrieve the deal id. This is not needed for some categories in the Global models, so the logic might differ.
+        - Booking metrics are split across 4 different models in the Global view, while there is just one model in the By Deal view. This was done as a performance optimisation exercise - yes, categorising is expensive.
+- **Dates**:
+ - **Goal**: Provide an empty date framework that serves as the skeleton of the needed dates/granularity for each KPI type.
+ - **Similarities**:
+ - Each KPI visualisation type, Global and by Deal, have a unique dependency on a Date model.
+ - **Differences**:
+        - The `int_dates_mtd_by_category` model contains dates, category and category value and allows for the MTD aggregation ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_dates_mtd_by_dimension.sql)), while `int_dates_by_deal` contains the Deal aggregation - hence the by deal suffix - but does not allow for the MTD aggregation - hence no mtd prefix ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_dates_by_deal.sql)).
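+
+As a minimal sketch of the category loop described above (it assumes `get_kpi_dimensions()` returns a list of dimension names, and the source and metric names are made up for illustration; the real models apply dimension-specific joins and where conditions):
+
+```sql
+-- Illustrative sketch only: loop over the configured dimensions and union
+-- the per-dimension aggregations into a single result.
+{% for dimension in get_kpi_dimensions() %}
+select
+    date,
+    '{{ dimension }}' as dimension,
+    count(*) as created_bookings  -- example metric (assumed name)
+from bookings                     -- example source (assumed name)
+group by date
+{% if not loop.last %}union all{% endif %}
+{% endfor %}
+```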
+
+# How to create a new metric?
+
+Follow these steps:
+
+1. Identify if the metric is Global, by Deal or both. Most likely it’s both, unless you’re creating a Deal-based metric for which a by Deal breakdown wouldn’t make sense. This will clarify whether you need to modify one of the branches or both of them.
+2. Identify the source of your metric. From here we can have different possibilities:
+    1. If, for instance, the metric is related to the Guest Journey, you might want to add it in `int_core__mtd_guest_journey_metrics` and `int_core__monthly_guest_journey_history_by_deal`. Similar reasoning applies to Bookings, Invoicing, Guest Payments, Listings, etc.
+ 2. If the metric “type” does not exist yet, such as implementing a Hubspot-based client onboarding opportunities metrics, ideally you’d create a standalone model by replicating the structure of an already existing source model. Copy-paste and adapt 🙂
+    3. If your metric is a combination of two or more different sources, such as Total Revenue by Booking Cancelled, you will need to check whether the submetrics are already available. If they are, you can skip this part; if not, go to point a) or b). If it’s a derived metric within the same source, such as Guest Journey with Payment per Guest Journey Created, you can directly add it in `int_core__mtd_guest_journey_metrics` and `int_core__monthly_guest_journey_history_by_deal`.
+3. Propagate to intermediate aggregations. Let’s split Global and Deal based:
+ 1. Global KPIs:
+        1. Reference your newly created metric in the plain combination of sources in `int_mtd_vs_previous_year_metrics`. If you need to combine multiple metrics from different sources, this is the place to go. Remember to apply `nullif(coalesce(x,0)+coalesce(y,0),0)`-style structures for combined metrics, so that metrics still get combined when one of them is null without causing a division by zero error at the final aggregation 🙂. Example [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=110&lineEnd=111&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+ 2. Use the macro `calculate_safe_relative_increment` to compute the value, previous_year_value and relative_increment in the final query ([here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_mtd_vs_previous_year_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=187&lineEnd=188&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents)).
+ 2. KPIs by Deal:
+        1. Reference your newly created metric in the plain combination of sources in `int_monthly_aggregated_metrics_history_by_deal`. If you need to combine multiple metrics from different sources, this is the place to go. Remember to apply `nullif(coalesce(x,0)+coalesce(y,0),0)`-style structures for combined metrics, so that metrics still get combined when one of them is null without causing a division by zero error at the final aggregation 🙂. Example [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_aggregated_metrics_history_by_deal.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=95&lineEnd=96&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+4. Exposure of metrics. Let’s split Global and Deal based:
+ 1. Global KPIs:
+        1. Add the configuration of your new metric in `int_mtd_aggregated_metrics`. You’ll need to parametrise the order, the metric (name tag that will be displayed in the reporting), the number format (for formatting in the reporting) and which values it is going to use. Order by is informative, so you could replicate an existing one, although I recommend choosing a value that is not already in use so it’s clearer how we want to order the KPIs. **Important: keep in mind that merging and refreshing this will directly make this metric available and visible in the dashboard.**
+ 2. If your metric is or uses an invoicing metric that should not be displayed in the current month or the previous month, validate that the [condition applied](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/reporting/general/mtd_aggregated_metrics.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=38&lineEnd=39&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents) in the reporting file of `mtd_aggregated_metrics` works well.
+ 3. Modify Data Glossary to include the description of your new metric. Note that there’s no additional need to change anything else on the Power BI for Global metrics.
+ 2. Deal KPIs:
+ 1. Propagate the new metric from `int_monthly_aggregated_metrics_history_by_deal` to `monthly_aggregated_metrics_history_by_deal`. If this metric is or uses an invoicing metric, please use the macro `is_date_before_previous_month`. Example [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/reporting/general/monthly_aggregated_metrics_history_by_deal.sql&version=GBmodels/19382_dbt_metricflow_exploration&line=31&lineEnd=32&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+ 2. In Power BI, once the model in reporting has been refreshed, you will need to manually add the new metrics in the tabs: Detail by Deal and Deal Comparison. For each new metric, in PBI, you will need to manually specify the number format, the order of display and the name of the metric.
+
+# Additional notes
+
+1. You’ve seen that the two ways of displaying data at this stage are not consistent - beyond the fact of having the Deal granularity or not. This has some pros and cons, and it changes how a new metric is created. Global is much more DWH dependent, while By Deal needs more PBI modifications.
+2. At this stage, with the capacity to compute metrics at different dimensions, we’re starting to see some performance issues. These could increase significantly the more dimensions we add. An increase in the number of metrics could also have an impact, but at a much lower rate. This could open the door to refactoring such as:
+ 1. Daily based pre-aggregated semantic models at the deepest granularity, incrementally updated. This could look like:
+ 1. Time
+ 1. Date
+        2. Dimension (with some potential examples)
+            1. Deal ID
+            2. Billing Country
+            3. Listing Country
+            4. Booking Source
+            5. Customer Segmentation
+            6. etc.
+        3. Metric (daily)
+            1. Created Bookings
+            2. Checkout Bookings
+            3. etc.
+
+ in combination with,
+
+    2. A fully refreshed upper layer that aggregates the different metrics by looping per dimension, applies the MTD computation and handles converted metrics and other nuances
+
+    This setup is likely more scalable, and it could also integrate the current Global and By Deal views into just a single computation (see the sketch below).
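+
+    As a rough illustration of what such a daily pre-aggregated layer could look like (table and column names are hypothetical):
+
+    ```sql
+    -- Hypothetical daily pre-aggregated layer: one row per date, dimension,
+    -- dimension value and metric, refreshed incrementally at daily grain.
+    create table int_daily_metrics_by_dimension (
+        date             date,
+        dimension        varchar,  -- e.g. 'deal_id', 'billing_country', 'booking_source'
+        dimension_value  varchar,
+        metric           varchar,  -- e.g. 'created_bookings', 'checkout_bookings'
+        value            numeric
+    );
+    -- The fully refreshed upper layer would then loop per dimension over this
+    -- table, apply the MTD computation and handle converted metrics.
+    ```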
\ No newline at end of file
diff --git a/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-09-20 1070446ff9c980a4a850f159d4f55f8b.md:Zone.Identifier b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-09-20 1070446ff9c980a4a850f159d4f55f8b.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/(Legacy) Technical Documentation - 2024-09-20 1070446ff9c980a4a850f159d4f55f8b.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2024-07-26 - Glad you’re back, Pablo f40e0ea62143420d96b409f8f78e9fd9.md b/notion_data_team_no_files/2024-07-26 - Glad you’re back, Pablo f40e0ea62143420d96b409f8f78e9fd9.md
new file mode 100644
index 0000000..e6c889e
--- /dev/null
+++ b/notion_data_team_no_files/2024-07-26 - Glad you’re back, Pablo f40e0ea62143420d96b409f8f78e9fd9.md
@@ -0,0 +1,106 @@
+# 2024-07-26 - Glad you’re back, Pablo
+
+Things that happened when you were off that might require your attention
+
+# Xexe incident on July 18th
+
+All details here: [20240718-01 - Xe.com data not retrieved](20240718-01%20-%20Xe%20com%20data%20not%20retrieved%205c283e9aa4834323b38af0bff95477a5.md)
+
+# Revenue figures issues
+
+Revenue figures are not fully consistent between the Data (DWH) side and Finance. Xero-based reporting is generally ok, with some small discrepancies that have minimal impact and can partially be explained. However, a massive discrepancy on Guest Revenue (Waivers, Deposit Fees, Guest Products) was detected, since Data is reporting it with taxes included, while Finance seems to be reporting it without taxes. This also generates discrepancies within Data reporting itself, in the sense that Xero-based reporting/metrics are usually tax exclusive. The issue was communicated on 24th July to all users.
+
+[Data quality assessment: DWH vs. Finance revenue figures](Data%20quality%20assessment%20DWH%20vs%20Finance%20revenue%20fig%206e3d6b75cdd4463687de899da8aab6fb.md)
+
+# Grand Welcome invoicing
+
+GW is a franchise. They have 80 different accounts owned by their individual franchisees but they want them all to be billed as one.
+
+On 15th July, it was discussed with Finance (Suzannah and Jamie), Clay, Leo and Uri. The main idea would be to produce the invoicing export by deal id, meaning these 80 franchisees would be linked to a single Deal Id. This might have an impact on the invoicing reporting that I (Uri) am not really aware of, so no estimation of the impact or how much time it will take has been provided. There will be a follow-up on this subject by the beginning of August.
+
+Some other subjects:
+
+- This is the famous account with the 9k duplicated bookings in March. To compensate them for these problems, they have 2 free-of-charge months (which gives us a bit of extra time). **Note: the subject of duplicated listings was re-opened by Clay on Monday 22nd**
+- Because of new pricing, they want to change from a listing-based charging type to a booking-based charging type. This is still another discussion that Leo/Clay need to have with the client, but of course this could impact the way it’s invoiced.
+- Potentially, it could be interesting to create somehow a “super user” that would be able to see the Dashboard for many “users” assigned to them. This was open discussion, not commitment.
+
+# eDeposit and Athena migration
+
+Ana wrote to me the day you started your holidays. Apparently, there’s a CosmosDB migration that they wanted to do this sprint. In the refinement session it was discovered that this could impact the existing reporting we have on CosmosDB. Long story short, it seems the schema won’t change and it’s just the URL that changes.
+
+This might have an impact depending on how we’re retrieving the information from CosmosDB. We re-looped in Ben R and, after discussion with Ray, it seems OK to move forward since the impact on PBI reporting would be minimal.
+
+Pending update. Information available in #api-data channel.
+
+# Data Priorities check (15th July)
+
+Only Suzannah and Uri were here on this edition. Topics discussed:
+
+- Finance top prios:
+ - Minimum listing fees subject
+ - Very important - Check in Hero will be rolled out (offered) to all hosts that are interacting with Guests on Guest Journeys. Check in Hero will have a commission share with hosts, meaning that at the beginning of each new month we’d need to pay back part of the Check in Hero revenue. Suzannah to send an e-mail with the details (she did not send the e-mail, but Ben C actually asked me (Uri) and Ben R about feasibility - I said this will take quite a bit of time and effort)
+- KPIs / Business Overview
+ - We’d need to do an exercise on revenue comparison between Business Overview and Finance reports. It seems there are some discrepancies. A potential explanation could be the currency exchange rates (for historical finance figures on guest payments vs. the ones reported now). **See point 2 - Revenue figures :) :) :)**
+ - Suzannah noticed (and I noticed as well) that a snapshot taken on day D for a previous day Z can show different data when viewed on day D+X for that same day Z. I guess there’s some retroactive update happening in the database which, combined with the fact that we fully refresh the KPIs, causes this. We need to investigate this. Partially investigated as part of the revenues investigation.
+ - Provide a possibility to chart metrics in the Main KPIs dashboard (done).
+ - They would like to see the Host split per Client type (1-10 listings, PMs 11-100 listings, Enterprise 100+ listings), Geography (mostly Country, to be discussed: if I’m a host located in England, but I have a Listing in the US, which one should I consider? B2B or B2C?). This has been discussed in the KPIs sessions, the details and recording being here: [https://www.notion.so/knowyourguest-superhog/Business-KPIs-Definition-III-TMT-session-24th-July-2024-1bd5435844ac432f9161b1ccf4c4d062](https://www.notion.so/Business-KPIs-Definition-III-TMT-session-24th-July-2024-1bd5435844ac432f9161b1ccf4c4d062?pvs=21)
+- Other
+ - We need to provide access to all Finance to the Account Report (done)
+
+# Product visibility - Data visibility
+
+Product has been working on creating general guidelines to present roadmaps and initiatives to the different business teams. After checking with Ben, we’re also supposed to do it.
+
+Now, since Data is a bit special, Lou A has helped determine what we should apply and what not. In a nutshell:
+
+- We need to discuss priorities with Ben and Suzannah again because (as you see in this list) a lot of things happened in just 2 weeks. With this we can adapt the roadmap.
+- We should adapt each item in the roadmap, ideally fleshing out the description a bit more. The description template could be useful for us as well.
+- Lou A showed me the resolution roadmap she has, with broader timelines / not fully specified over time. I think this is a nice way to say “hey guys, we will do these during Q3, but I don’t commit to doing them in a given week”. It might be interesting to apply a similar strategy for Data.
+- Record a 2-minute video explaining how to interpret the Data roadmap, but not its contents.
+- Have a session with business teams, but open to everyone, explaining in a bit more detail what we aim to do in Q3 or in the future. This should be recorded. It can be just a 15-minute presentation plus questions.
+
+# Billing automation
+
+Within the new dashboard initiative, there’s a goal to automate billing. We had a first kickoff on the discovery phase with Product (Dagmara - leading the initiative, Lou D), Finance (Suzannah, Nathan, Jamie) and Tech (Ben R., Gus). Dagmara will need support from the Data side to ensure we can list the different data points used, the current process and so on.
+
+Dagmara has created a very nice summary with the steps that will follow: [https://www.notion.so/knowyourguest-superhog/Discovery-Plan-New-Dashboard-V3-Automated-Billing-940eb16d61684a4b9d2fca1001a127ea](https://www.notion.so/Discovery-Plan-New-Dashboard-V4-Automated-Billing-940eb16d61684a4b9d2fca1001a127ea?pvs=21)
+
+There’s also a slack channel #proj-automated-billing
+
+# Billable bookings
+
+While working on billable bookings, I started taking a look at the data-invoicing-exporter project. There are a couple of differences that we might need to discuss, especially the fact that charges triggered when the verification starts now use a different logic in data-invoicing-exporter (guest user joined date) vs. the DWH in booking_charge_events (guest used link date, i.e. the estimated start date).
+
+All details here:
+
+[Data quality assessment: Billable Bookings](Data%20quality%20assessment%20Billable%20Bookings%2097008b7f1cbb4beb98295a22528acd03.md)
+
+# Booking source field
+
+Based on a request from Joan (and actually something that also interested Lou D, and probably other people), we’ve developed a new Booking source field that has been propagated within the DWH. It might be worth doing a double check just to verify everything is OK.
+
+# Data incoherence on guest choices
+
+This comes from a request from Lou A, who wants to know how many guests choose no cover over the other payment options presented to them. Using a query provided by Lawrence E:
+
+```sql
+select *,
+    CASE WHEN DisabledValidationOptions & 1 > 0 THEN 0 ELSE 1 END AS "Fee(1)",
+    CASE WHEN DisabledValidationOptions & 2 > 0 THEN 0 ELSE 1 END AS "Membership(2)",
+    CASE WHEN DisabledValidationOptions & 4 > 0 THEN 0 ELSE 1 END AS "FeeWithDeposit(4)",
+    CASE WHEN DisabledValidationOptions & 8 > 0 THEN 0 ELSE 1 END AS "Waiver(8)",
+    CASE WHEN DisabledValidationOptions & 16 > 0 THEN 0 ELSE 1 END AS "NoCover(16)"
+from live.dbo.PaymentValidationSetToCurrency
+```
+
+(`DisabledValidationOptions` is a bitmask: each `CASE` expression checks one bit and flags whether that payment option was offered (1) or disabled (0).) We found some cases where the options offered, according to this data, do not match the choices the guests actually made. We have already brought this data incoherence up with Lawrence and he is currently working on finding the problem, hopefully with a solution soon.
+
+# Screening API Report ready for deployment
+
+We have the new report for the Screening API ready to go as soon as it is needed.
+
+[Link](https://app.powerbi.com/groups/me/apps/043c0aec-20b8-4318-9751-f7164b3634ad/reports/c69e3d40-a669-4dc3-899e-dbc84a0c6c24/ReportSectionbd92a560d1aa856ba993?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024-07-26 - Glad you’re back, Pablo f40e0ea62143420d96b409f8f78e9fd9.md:Zone.Identifier b/notion_data_team_no_files/2024-07-26 - Glad you’re back, Pablo f40e0ea62143420d96b409f8f78e9fd9.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2024-07-26 - Glad you’re back, Pablo f40e0ea62143420d96b409f8f78e9fd9.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2024-08-20 - Glad you’re back, Uri cc3bb68690e04a5f952d0dd78d5abbef.md b/notion_data_team_no_files/2024-08-20 - Glad you’re back, Uri cc3bb68690e04a5f952d0dd78d5abbef.md
new file mode 100644
index 0000000..02383c8
--- /dev/null
+++ b/notion_data_team_no_files/2024-08-20 - Glad you’re back, Uri cc3bb68690e04a5f952d0dd78d5abbef.md
@@ -0,0 +1,5 @@
+# 2024-08-20 - Glad you’re back, Uri
+
+- A peculiar PR that reduced the execution time of `int_core__mtd_booking_metrics` from 1100 seconds to 10 seconds.
+ - [https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/2774](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/2774)
+- I started out a list of interesting tools: [Cool tools](Cool%20tools%20afdf8f69b4b0498aaee66ad1a520cc0d.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024-08-20 - Glad you’re back, Uri cc3bb68690e04a5f952d0dd78d5abbef.md:Zone.Identifier b/notion_data_team_no_files/2024-08-20 - Glad you’re back, Uri cc3bb68690e04a5f952d0dd78d5abbef.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2024-08-20 - Glad you’re back, Uri cc3bb68690e04a5f952d0dd78d5abbef.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2024-10-02 - Integrating New Dashboard & New Prici 1130446ff9c9804a9cb2f5d49e073bab.md b/notion_data_team_no_files/2024-10-02 - Integrating New Dashboard & New Prici 1130446ff9c9804a9cb2f5d49e073bab.md
new file mode 100644
index 0000000..974d52a
--- /dev/null
+++ b/notion_data_team_no_files/2024-10-02 - Integrating New Dashboard & New Prici 1130446ff9c9804a9cb2f5d49e073bab.md
@@ -0,0 +1,84 @@
+# 2024-10-02 - Integrating New Dashboard & New Pricing into DWH
+
+List of Core tables linked to New Pricing (NP) / New Dashboard (ND)
+
+| Table Name | Description | Main fields | Status | DWH usages | Uri’s comments |
+| --- | --- | --- | --- | --- | --- |
+| Claim | Not exclusively for NP/ND, but used to know which users have been switched in different stages from Old Dashboard to New Dashboard | - UserId
+- ClaimType
+- ClaimValue | Fully integrated, might need updates | We apply a macro based on the content of this table:
+- [user_migration_configuration](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/macros/user_migration_configuration.sql)
+This is later used in the [user_migration](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__user_migration.sql) model to identify migrated users from old to new dash. This is later added into the main table of [user_host](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__user_host.sql) | Likely we’ll need updates as new versions of the dashboard are launched. Should be covered already for MVP and V2.
+
+Not clear how to tag “new dash users” for those cases that the user gets directly created into the new dash (instead of switched). To be confirmed
+- Quick discussion with Daga: what if we take 1) Claim to know which user is in New Dash and 2) Claim to know which user has been switched and when. Then 1)-2) is new users in new dash, and the creation date of the user is the new “start date”. |
+| ProductBundle | Basic information of a product bundle | - Id (ProductBundleId)
+- ProtectionPlanId | Not integrated, will not integrate | | Not needed for the moment since UserProductBundle already contains denormalised information of the product bundle (ex: name, protection plan id) |
+| ProductBundleDescription | Description of the product bundle | - ProductBundleId | Not integrated, will not integrate | | Not needed for the moment since it only contains an explanation of what the product bundle means for client point of view |
+| UserProductBundle | It’s the main table: it states that the user has, or has had, the capacity to apply product bundles into a listing. This does not mean however that these are/were actually applied. A bundle contains one or more product services and has a certain protection plan. | - Id (UserProductBundleId)
+- SuperhogUserId
+- ProductBundleId
+- ProtectionPlanId
+- ChosenProductServices
+- StartDate
+- EndDate | Fully integrated, might need updates | Main usage in the model:
+- [user_product_bundle](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__user_product_bundle.sql) | Not all users in this table are in New Dashboard. Thus, specifically for New Dash reporting, we force users to exist in the user_migration model. Also, we create an effective start date of a product bundle so this start date is not before the user switched from old to new dash. |
+| AccommodationToProductBundle | Another main table: it states that a listing has, or has had, a product bundled applied; thus affecting the bookings of that listing with the specific product bundle. | - Id (AccommodationToProductBundleId)
+- UserProductBundleId
+- AccommodationId
+- StartDate
+- EndDate | Fully integrated, no need to update | Main usage in the model:
+- [accommodation_to_product_bundle](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__accommodation_to_product_bundle.sql) | Important - this table will NOT contain basic screening product bundle. This is because this bundle is by default. Thus only product bundles different to the basic screening can be applied. Also, we create an effective start date of a product bundle so this start date is not before the user configured the bundle (see UserProductBundle comment) |
+| BookingToProductBundle | States that a booking has had a product bundle (well, a user product bundle) applied. Thus this can be used to know the product and protection services that were offered (not necessarily those that finally applied). | - Id
+- UserProductBundleId
+- BookingId | Fully integrated, no need to update | Main usage in the model:
+- [booking_to_product_bundle](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/core/int_core__booking_to_product_bundle.sql&version=GBmaster&_a=contents) | Unsure of why we have StartDate and EndDate in this table. Not using it 😀
+
+We also enforce that the user needs to have had the product bundle configured before the booking was created (effectively meaning that we exclude bookings from migrated users that were created before the migration date). |
+| ProductService | Basic information of the product service | - Id (ProductServiceId)
+- ProductServiceFlag | Integrated in staging, needs further modelisation | Integrated into staging:
+- [product_service](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/staging/core/stg_core__product_service.sql) | |
+| ProductServiceToPrice | Basic information of product services and their prices. It states that a given product service will have a certain price for a given currency, with a given price base unit, with a certain invoicing trigger and a specific payment type. Additionally, it will state if this price is a default one or a dedicated one for a given user, in case UserProductBundleId is set. | - Id (ProductServiceToPriceId)
+- ProductServiceId
+- CurrencyId
+- UserProductBundleId
+- BillingMethodId
+- InvoicingMethodId
+- PaymentTypeId
+- StartDate
+- EndDate
+- Amount | Integrated in staging, needs further modelisation | Integrated into staging:
+- [product_service_to_price](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/staging/core/stg_core__product_service_to_price.sql) | We do directly the denormalisation of the attributes BillingMethodId, InvoicingMethodId and PaymentTypeId at the staging layer. Also, there’s the following rename:
+- BillingMethod = price_base_unit
+- InvoicingMethod = invoicing_trigger
+- PaymentType remains the same |
+| BillingMethod | Whether the price of the ProductServiceToPrice is at Booking level or per number of nights | - Id (BillingMethodId) | Fully integrated, no need to update | Integrated directly into the ProductServiceToPrice staging layer as price_base_unit | |
+| InvoicingMethod | When the service needs to be invoiced, at which moment of time | - Id (InvoicingMethodId) | Fully integrated, no need to update | Integrated directly into the ProductServiceToPrice staging layer as invoicing_trigger | |
+| PaymentType | Whether the price is stated as an Amount or as Percentage | - Id (PaymentTypeId) | Fully integrated, no need to update | Integrated directly into the ProductServiceToPrice staging layer as payment_type | |
+| Protection | Basic information of the Protection Services | - Id (ProtectionId)
+- RequiredProductServices | Integrated in staging, needs further modelisation | | Seems a 1 to 1 relation with ProtectionPlan. If so, I’ll just add everything into a single protection_plan table |
+| ProtectionPlan | Historification in case there’s changes on any Protection | - Id (ProtectionPlanId)
+- ProtectionId
+- StartDate
+- EndDate | Integrated in staging, needs further modelisation | | |
+| ProtectionPlanToPrice | Similar contents as ProductServiceToPrice, but for Protection. In essence, how much it costs to have a dedicated protection (for a given currency, price base unit, invoicing trigger, payment type). Also if it’s a default price or a dedicated one for a given user, in case UserProductBundleId is set | - Id (ProtectionPlanToPriceId)
+- ProtectionPlanId
+- CurrencyId
+- UserProductBundleId
+- BillingMethodId
+- InvoicingMethodId
+- PaymentTypeId
+- StartDate
+- EndDate
+- Amount | Integrated in staging, needs further modelisation | | We should follow a similar strategy as for ProductServiceToPrice
+
+Maybe rename internally as ProtectionServiceToPrice? Avoid confusion with ProtectionPlanToCurrency |
+| AppliedProductService | Key table to know “this Booking has these Product Services applied”. Currently WIP in backend side, necessary for Revenue computation and Service usage | - TBD | To be integrated, waiting for backend | TBD | We asked for additional id fields so we can link the information with other main tables easily. Also, see if we can follow an insert only approach to keep the history. |
+| AppliedProtectionService | Similar as AppliedProductService but for Protection. Does not exist yet | - TBD | To be integrated, waiting for backend | TBD | We asked to have this table created so we can have a similar strategy as we will do for AppliedProductService |
+| ProtectionPlanToCurrency | How much we protect per Protection Service and Currency. This contains protections itself, rather than prices for the protections, thus can wait for later. | - Id (ProtectionPlanToCurrencyId)
+- ProtectionPlanId | Integrated in staging, needs further modelisation | | I think I will add a different name to avoid confusions with ProtectionPlanToPrice. |
+| DepositManagement | | | ? | | Deposit management is the nomenclature used for Waivers/Deposits services, but these exist in ProductServices and I’m not sure the following tables have a direct link or are strictly needed for our reporting. |
+| DepositManagementItem | | | ? | | |
+| DepositManagementItemToProtection | | | ? | | |
+| DepositManagementItemToProtectionAmount | | | ? | | |
+| | | | | | |
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024-10-02 - Integrating New Dashboard & New Prici 1130446ff9c9804a9cb2f5d49e073bab.md:Zone.Identifier b/notion_data_team_no_files/2024-10-02 - Integrating New Dashboard & New Prici 1130446ff9c9804a9cb2f5d49e073bab.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2024-10-02 - Integrating New Dashboard & New Prici 1130446ff9c9804a9cb2f5d49e073bab.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2024-10-03 - Glad you’re back, Ben 1130446ff9c98007af11c24731bd2ac7.md b/notion_data_team_no_files/2024-10-03 - Glad you’re back, Ben 1130446ff9c98007af11c24731bd2ac7.md
new file mode 100644
index 0000000..7a6abab
--- /dev/null
+++ b/notion_data_team_no_files/2024-10-03 - Glad you’re back, Ben 1130446ff9c98007af11c24731bd2ac7.md
@@ -0,0 +1,45 @@
+# 2024-10-03 - Glad you’re back, Ben
+
+The Data team is very happy that you’re back, Ben! Below is a list with links to the main aspects that might be relevant for you regarding the Data scope over these past 2 months.
+
+> **Table of contents**
+>
+
+# Q3 recap
+
+Probably the easiest way to keep an eye on what happened lately is to check our Q3 Achievements page, in which we summarise both the status of the objectives we set at the beginning of the quarter and the new lines of work that popped up.
+
+[Q3 Data Achievements ](Q3%20Data%20Achievements%201130446ff9c9800e84e4f03750b752a1.md)
+
+As usual, the Data OKRs are available in our dedicated Notion page:
+
+[Data OKRs](https://www.notion.so/299e4da6e92043899646d11609c051ae?pvs=21)
+
+For more in-depth detail, we also suggest checking the latest entries in the Data News:
+
+[Data News](https://www.notion.so/Data-News-7dc6ee1465974e17b0898b41a353b461?pvs=21)
+
+# Q4 planning
+
+We did an exercise with Ana and the TMT to plan for Q4. The draft for Q4 OKRs is available in the dedicated Notion page:
+
+[Data OKRs](https://www.notion.so/299e4da6e92043899646d11609c051ae?pvs=21)
+
+To give a bit more of an overview of what each initiative implies, there’s a more verbose page that we shared with the TMT for our quarterly planning meeting:
+
+[Q4 Data Scopes proposal](Q4%20Data%20Scopes%20proposal%2075bf38ab8092471d910840ab86b0ec60.md)
+
+At the time of writing this Notion page, an update of the Q4 roadmap is still pending.
+
+# Recap of Power BI reports
+
+This is just an extensive list of all available Power BI apps, including existing ones + new ones. Let us know if you’re missing access to some of these.
+
+- [Business Overview](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/ReportSectionddc493aece54c925670a?experience=power-bi) (contains Revenue reports and Main KPIs)
+- [Check-in Hero](https://app.powerbi.com/groups/me/apps/14859ed7-b135-431e-b0a6-229961c10c68/reports/8e88ea63-1874-47d9-abce-dfcfcea76bda/ReportSectionddc493aece54c925670a?experience=power-bi)
+- [Currency Exchange](https://app.powerbi.com/groups/me/apps/10c41ce2-3ca8-4499-a42c-8321a3dce94b/reports/fcfd0a77-6c2a-4379-89be-aa0b090265d7/64ddecd28ca50dc3f029?experience=power-bi)
+- [Superhog reporting (legacy)](https://app.powerbi.com/groups/me/apps/86bd5a07-0cd9-40ab-9e97-71816e3467e8/reports/fe54c090-ae85-4cfd-9f28-3d31ab486bc3/ReportSectiond82bb2cfdd980be42da5?experience=power-bi)
+- [Guests Insights](https://app.powerbi.com/groups/me/apps/2464d25c-056c-4b94-9a7f-26b72c7fde33/reports/b6ff2cf4-5abb-4c1b-9341-b6f2dae04900/2f768051ca6abb70b39a?experience=power-bi) (contains Guest satisfaction CSAT score for the moment)
+- [Accounting](https://app.powerbi.com/groups/me/apps/4a019abb-880f-4184-adc9-440ebd950e00/reports/86abbd2f-bfa5-4a51-adf5-4c7a3be9de07/b992edecc5478e506a75?experience=power-bi) (contains Host Resolutions + Invoicing and Crediting)
+- [API Reports](https://app.powerbi.com/groups/me/apps/043c0aec-20b8-4318-9751-f7164b3634ad/reports/c69e3d40-a669-4dc3-899e-dbc84a0c6c24/ReportSectionbd92a560d1aa856ba993?experience=power-bi) (contains Screening API and E-Deposit Invoice)
+- [New Dashboard Reporting](https://app.powerbi.com/groups/me/apps/7197c833-dbf9-4d2c-bca1-95f74aec4b11/reports/f0bad5b7-d9d2-45ba-a3cb-d190dd91b493/1bbfbee419e040409b95?experience=power-bi) (contains User adoption of New Dash, currently MVP)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024-10-03 - Glad you’re back, Ben 1130446ff9c98007af11c24731bd2ac7.md:Zone.Identifier b/notion_data_team_no_files/2024-10-03 - Glad you’re back, Ben 1130446ff9c98007af11c24731bd2ac7.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2024-10-03 - Glad you’re back, Ben 1130446ff9c98007af11c24731bd2ac7.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2024-10-24 - Glad you’re back, Joaquin 1270446ff9c9808bb60fd1e759ff421c.md b/notion_data_team_no_files/2024-10-24 - Glad you’re back, Joaquin 1270446ff9c9808bb60fd1e759ff421c.md
new file mode 100644
index 0000000..95cc2cd
--- /dev/null
+++ b/notion_data_team_no_files/2024-10-24 - Glad you’re back, Joaquin 1270446ff9c9808bb60fd1e759ff421c.md
@@ -0,0 +1,61 @@
+# 2024-10-24 - Glad you’re back, Joaquin
+
+Pablo and Uri hope you had an amazing holiday. Some things happened while you were away, so here’s a summary!
+
+# Domain Analysts programme has started
+
+We had the first session with Jamie and Alex and explained a bit of what we aim to achieve during this Q4 - and gave them some SQL homework to do!
+
+There’s a new slack channel named #analyst-guild in which we can discuss directly with them and you will find more relevant information in there. Check this [Notion page](https://www.notion.so/Q4-Training-and-Onboarding-Plan-1210446ff9c980cb9eb1c2e1895c0f46?pvs=21) to learn more.
+
+# Athena claims analysis
+
+Pablo did some unplanned yet very critical work for Athena. Apparently, it was assumed that Athena was a good source of cash for us, but it turns out the amount paid out for claims is huge. After further checks, it seems that the majority of critical claims come from just a few claimants, and so a re-negotiation has been started by key people in the company. A very good example of why we need Data!
+
+# E-deposit migration was a great success
+
+After some weeks preparing the migration with the API squad, we now have two independent flows feeding E-deposit and Athena. Everything went according to plan. This effectively means that the current status looks like this:
+
+E-deposit:
+
+
+
+Athena:
+
+
+
+# Hubspot deal data is integrated and being used
+
+We focused on integrating Deal data as soon as possible, as we had some top-priority needs for Account Managers reporting and the Churn definition. Among the different Hubspot entities, we started with Deal. This data is already being used in KPIs and new models, as can be seen here:
+
+
+
+We’ll discuss what’s next for the remaining entities, but so far this has proven to be enough and already very valuable, as you can see in the following entries.
+
+# Churn definition
+
+A big subject has been to define Revenue, Listing and Booking Churn Rates. We did this exercise with Suzannah, Matt and Alex.
+
+In short, we assume Revenue, Listing and Booking Churn come from accounts that are churning. In other words, from Deals in a Churning state (a state a Deal can only be in for 1 month before becoming Inactive).
+
+First things first, we improved the logic for when we consider a Deal to be Churning. A Deal is now Churning if either the already existing definition holds (i.e., the last booking was created exactly 13 months ago) OR the Deal has offboarded in a given month (see the sketch below). The offboarding information comes from Hubspot, from the cancellation date attribute. This is one of the changes visible in the previous screenshot, in which int_mtd_deal_lifecycle now has a dependency on Hubspot deals. You might notice as well that this model is no longer in Core, but in Cross, since it has both Hubspot and Core dependencies.
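+
+For intuition only - the table and column names below are made up, and this is not the actual int_mtd_deal_lifecycle implementation - the combined rule could be sketched as:
+
+```sql
+-- Illustrative sketch (assumed names): a Deal is Churning in a given month if its
+-- last booking was created exactly 13 months before, OR it offboarded that month.
+SELECT
+    d.deal_id,
+    m.month_start,
+    (date_trunc('month', d.last_booking_created_at) = m.month_start - INTERVAL '13 months'
+     OR date_trunc('month', d.hubspot_cancellation_date) = m.month_start) AS is_churning
+FROM deals AS d
+CROSS JOIN months AS m
+```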
+
+Second - we needed a proper computation of Revenue. If you remember, Revenue previously deducted the amount we pay to hosts for waivers. This is no longer the case, meaning total revenue figures are closer to the Finance definition (and bigger than before). This has already been deployed for a couple of weeks.
+
+We’ve also created new contribution models that let us know the % of Revenue, Listings Booked in Month and Created Bookings each Deal has in a 12-month window. This is a bit more complex since we’re not using an Additive approach but rather an Average one, because of business needs in the definition itself; I encourage you to check the model implementation if you’re interested [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_12m_window_contribution_by_deal.sql), or see the illustrative sketch below. This “by deal monthly” computation is then used to compute the Main KPIs, meaning Global KPIs now have a strict dependency on Monthly KPIs by Deal. On top of this, a final model computes the Churn contribution [here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_churn_metrics.sql).
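+
+To give a flavour of the Average approach (this is only a sketch with made-up table and column names, not the real model), the 12-month contribution can be thought of as the average of each Deal’s monthly share rather than the share of 12-month totals:
+
+```sql
+-- Illustrative sketch (assumed names): average of the Deal's monthly revenue share
+-- over a trailing 12-month window.
+SELECT
+    deal_id,
+    month_start,
+    AVG(deal_revenue / NULLIF(total_revenue, 0)) OVER (
+        PARTITION BY deal_id
+        ORDER BY month_start
+        ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
+    ) AS revenue_contribution_12m
+FROM monthly_revenue_by_deal   -- hypothetical model: one row per deal per month
+```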
+
+These 3 Churn Rates are already deployed since Tuesday 22nd and available in Main KPIs.
+
+# Churn prevention → top losers → Account managers reporting
+
+Another piece of work related to churn. In this case the focus is not on measuring churn but on providing indicators for each account in terms of “growth” and “impact”, so that Account Managers and RevOps in general can smartly dedicate effort where it’s really needed. This, if actioned by AMs, should reduce Churn (hence churn prevention).
+
+Long story short, we have a [new report here](https://app.powerbi.com/groups/me/apps/bb1a782f-cccc-4427-ab1a-efc207d49b62/reports/797e7838-3119-4d0e-ace5-2026ec7b8c0e/cabe954bba6d285c576f?experience=power-bi). Originally it was called top losers (because we categorised accounts as top losers, losers, winners, etc.) but it has now grown a bit in scope, so it’s just the Account Managers Overview. This report gathers all accounts by deal and, each month, evaluates the growth and the impact this growth has on the overall business. Aaaaand with this we categorise accounts into 5 groups. I’d recommend checking the readme since it’s quite detailed, or, if you prefer 423 lines of SQL code, you can check [the model here](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project?path=/models/intermediate/cross/int_monthly_growth_score_by_deal.sql).
+
+Lastly, we’ve recently integrated some Hubspot information for each Deal so Account Managers and decision-makers have greater detail. For instance, we’re able to detect accounts that have not churned yet and are still active, and thus potentially actionable on the AM side.
+
+It’s extremely useful for explaining increases in Churn Rates in specific months - I’ll let you check the August 2024 peak and draw your own conclusions 🙂
+
+# General update
+
+We’ll talk about this in the first meeting.
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024-10-24 - Glad you’re back, Joaquin 1270446ff9c9808bb60fd1e759ff421c.md:Zone.Identifier b/notion_data_team_no_files/2024-10-24 - Glad you’re back, Joaquin 1270446ff9c9808bb60fd1e759ff421c.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2024-10-24 - Glad you’re back, Joaquin 1270446ff9c9808bb60fd1e759ff421c.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/20240611 Retro 9b8bbbe210d04a55a753616c2fb0be2c.md b/notion_data_team_no_files/20240611 Retro 9b8bbbe210d04a55a753616c2fb0be2c.md
new file mode 100644
index 0000000..0d559bd
--- /dev/null
+++ b/notion_data_team_no_files/20240611 Retro 9b8bbbe210d04a55a753616c2fb0be2c.md
@@ -0,0 +1,48 @@
+# 20240611 Retro
+
+## 🙌 What went well
+
+- Priorities and capacity
+ - Data team has increased in capacity
+ - TMT has a lot more visibility and alignment with us
+ - We’ve done a good job at structuring demand and keeping pushy stakeholders at bay
+ - Adhoc/Data Captain deliveries have been flowing orders of magnitude better than in the Pablo-era
+- Team organization is working well
+ - Internal collaboration is quite smooth so far (we’re good people 🙂)
+ - VERY GOOD documentation in Notion, repositories, etc. on the Data stack
+ - I really appreciate the way we are organizing the team and distributing responsibilities, as well as the tools we are using, like the board, which makes it very easy to keep track of everyone’s assignments.
+ - I think the dailies are also very helpful for staying in contact and updated on what everyone is doing in their day-to-day work.
+- I feel very comfortable with the team and with everyone’s willingness to be helpful inside and outside the office.
+
+## 🌱 What needs improvement
+
+- Development workflow in dbt / PBI could be more agile and frictionless
+- Stakeholder visibility/relationship with other teams
+ - Clear lack of data exposed / reported to business and product teams
+ - Backlog of engineering dependencies/topics is messy and we drop balls
+ - Data priorities and tempos visibility to the rest of the business
+- Data modelisation problems from the source (ex: guest journey end date needs tons of logic because it was not properly implemented, expected revenue figures, sources of Hosts, etc)
+- Having documentation of all the data we can work with, maybe of the source tables.
+
+## 💡 Ideas for what to do differently
+
+- Tooling
+ - More hands-on development onboarding for Data
+ - A bit complicated to review PBI reports - ensure these are exposed in our workspace for Data team reviews?
+- More connection with the engineering team
+- Start running dbt tests in production
+- Reduce bus factor for Data Engineering
+
+## ✔ Action items
+
+- [ ] Formalize further the relationship between Data <> Engineering and dependencies
+- [ ] Backend documentation and know-how Productboard item
+- [ ] Simplify dumping of prd data to local environment
+- [ ] Add to backlog the creation of onboarding-hello-world-challenges
+- [ ] Discuss further hands in Engineering with Ben C.
+- [ ] Add `dbt test` to dbt run script
+- [x] Kidnap staging workspace to make delivery
+
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240611 Retro 9b8bbbe210d04a55a753616c2fb0be2c.md:Zone.Identifier b/notion_data_team_no_files/20240611 Retro 9b8bbbe210d04a55a753616c2fb0be2c.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/20240611 Retro 9b8bbbe210d04a55a753616c2fb0be2c.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/20240619-01 - CheckIn Cover multi-price problem fabd174c34324292963ea52bb921203f.md b/notion_data_team_no_files/20240619-01 - CheckIn Cover multi-price problem fabd174c34324292963ea52bb921203f.md
new file mode 100644
index 0000000..7edb315
--- /dev/null
+++ b/notion_data_team_no_files/20240619-01 - CheckIn Cover multi-price problem fabd174c34324292963ea52bb921203f.md
@@ -0,0 +1,104 @@
+# 20240619-01 - CheckIn Cover multi-price problem
+
+This page is to track a production bug spotted on 2024-06-19, 12:30 ES time.
+
+The problem was solved on 2024-06-19, 17:30 ES time.
+
+## Executive Summary
+
+- I (Pablo) believe some pieces of data were manually modified in an improper way in the Core Superhog database, due to some tech debt and user mistakes we are living with.
+- This propagated into data quality issues in the DWH, which eventually led to wrong reporting in the Checkin Hero reporting and the Business Overview reporting suites, including inflated revenue numbers.
+- This specific problem will be fixed by the Data team with some engineering work, but we need to run a postmortem on how it happened and change our way of doing things to avoid **massive business problems in the future**. **I’m calling all of us to serious action to avoid more of this in the future.**
+
+## Initial problem
+
+- Pablo spotted a duplicate record in the DWH table `reporting_core__vr_check_in_cover` while running some data tests on the DWH. This table summarizes some details around guest journeys with checkin cover. The `VerificationRequestId` for these records is `749989`.
+- The duplicate records showed all fields with same values except for `checkin_cover_limit_amount_local_curr` and `checkin_cover_limit_amount_in_gbp`.
+
+## Root cause research
+
+- Pablo’s initial suspicion was a duplicate record in the `Payments` table causing the issue, but this was not the case.
+- Pulling the thread, Pablo found out that the DWH table `int_core__check_in_cover_prices` had two records for `EUR` and `CAD` .
+ - This issue was causing the original problem of duplicate records in `reporting.core__vr_check_in_cover` .
+ - This is because `int_core__check_in_cover_prices` is expected to have only one record per currency.
+ - This is so because `int_core__check_in_cover_prices` builds a record per price by grouping the `PaymentValidationSetToCurrency` by `CurrencyIso`, `CheckInCoverCost` and `CheckInCoverLimit` .
+- So, the next step was to find out why `int_core__check_in_cover_prices` was showing the duplicate records for `EUR` and `CAD`.
+- The source table for this data in the DWH is `sync_core.PaymentValidationSetToCurrency`.
+- Pablo ran the following query:
+
+ ```sql
+ SELECT *
+ FROM sync_core."PaymentValidationSetToCurrency" pvstc
+ WHERE ("CheckInCoverCost" != 11
+ AND "CurrencyIso" = 'EUR')
+ OR
+ ("CheckInCoverCost" != 13
+ AND "CurrencyIso" = 'CAD')
+ ```
+
+ Which yielded the following output:
+
+ ```
+ "Id","Fee","Amount","Waiver","Protection","Reschedule","CreatedDate","CurrencyIso","UpdatedDate","IsFeeRefundable","CheckInCoverCost","CheckInCoverLimit","PaymentValidationSetId","DisabledValidationOptions","_airbyte_raw_id","_airbyte_extracted_at","_airbyte_meta"
+ 29583,0.000000000,690.000000000,21.000000000,48.000000000,,2024-05-16 15:04:23.080,CAD,2024-05-16 15:04:23.080,false,690.000000000,130.000000000,3710,7,f13f779f-e054-465e-b301-aa38e88808e0,2024-05-16 18:00:12.917 +0200,"{""errors"": []}"
+ 31053,46.000000000,930.000000000,110.000000000,760.000000000,,2024-06-13 17:47:04.143,EUR,2024-06-13 17:47:04.143,false,14.000000000,130.000000000,3894,18,"43112786-986a-4cfa-aef6-135c1a1b5067",2024-06-13 21:00:13.103 +0200,"{""errors"": []}"
+ 31085,19.000000000,940.000000000,66.000000000,940.000000000,,2024-06-13 18:52:47.003,EUR,2024-06-13 18:52:47.003,false,14.000000000,130.000000000,3898,19,fbd15fa4-8691-41ea-a8d2-1edb82e4355f,2024-06-13 22:00:12.451 +0200,"{""errors"": []}"
+
+ ```
+
+- The results show that there are three records of `PaymentValidationSetToCurrency` that don’t have the *regular* values for `EUR` and `CAD`.
+ - This is a major issue, because there was an established contract that, even though CheckIn Cover cost and limit figures appear in different records per `PaymentValidationSet`, the price is supposed to be a global one for all of Superhog. This data breaks that contract.
+- The next question that was posed was: is this data looking the same in Superhog’s backend?
+- I ran the following query in the Core database, `Live`
+
+ ```sql
+ SELECT Id, CurrencyIso, Amount, Fee, PaymentValidationSetId, CreatedDate, UpdatedDate, IsFeeRefundable, DisabledValidationOptions, Waiver, Protection, Reschedule, CheckInCoverCost, CheckInCoverLimit
+ FROM live.dbo.PaymentValidationSetToCurrency
+ WHERE Id = 31053 OR Id = 31085 OR Id = 29583
+ ```
+
+ Which yielded the following output:
+
+ ```
+ "Id","CurrencyIso","Amount","Fee","PaymentValidationSetId","CreatedDate","UpdatedDate","IsFeeRefundable","DisabledValidationOptions","Waiver","Protection","Reschedule","CheckInCoverCost","CheckInCoverLimit"
+ 29583,CAD,690.00000,0.00000,3710,2024-05-16 15:04:23.080,2024-05-16 15:04:23.080,0,7,21.00000,48.00000,,13.00000,130.00000
+ 31053,EUR,930.00000,46.00000,3894,2024-06-13 17:47:04.143,2024-06-13 17:47:04.143,0,18,110.00000,760.00000,,11.00000,85.00000
+ 31085,EUR,940.00000,19.00000,3898,2024-06-13 18:52:47.003,2024-06-13 18:52:47.003,0,22,66.00000,940.00000,,11.00000,85.00000
+
+ ```
+
+- Major problem. Data is not looking the same.
+ - The `CheckInCoverCost` in `dwh.sync_core.PaymentValidationSetToCurrency` are `690, 14, 14`.
+ - The `CheckInCoverCost` in `live.dbo.PaymentValidationSetToCurrency` are `13, 11, 11`.
+- This is pointing to an issue in the Core <> DWH integration that happens through Airbyte.
+
+Summarizing the issues, from root to effects:
+
+- Some faulty `live.dbo.PaymentValidationSetToCurrency` values somehow came from Core to DWH, and were afterward changed in Core. This must have been done without respecting the `UpdatedDate` field of the table.
+- The faulty values broke the intended granularity of `dwh.intermediate.int_core__check_in_cover_prices`, which propagated into `dwh.reporting.core__vr_check_in_cover`.
+- The issue in `dwh.reporting.core__vr_check_in_cover` caused (and is still causing) revenue and funnel numbers to show wrong stats, basically inflating them artificially.
+
+## Remediation
+
+- Short-term: I will have to run a backfill between `live.dbo.PaymentValidationSetToCurrency` and `dwh.sync_core.PaymentValidationSetToCurrency` to ensure that the data across both is the same again.
+- Beyond that: we need to understand how this situation came to life and ensure it is not repeated. My (Pablo on the keyboard) **hypothesis** on what happened is the following:
+ - Someone modified the CheckIn Cover prices in Wilbur for some accounts, in fields that should NOT be editable yet are (Joan and Lawrence can provide more details on this issue). Perhaps it was an AM experimenting or trying to cater to some host needs.
+ - Someone realized this happened and somehow put in the necessary dev resources to fix it directly in the database. That is, they literally brought the database field values back to what they should have been. This is in contrast with simply changing the setting in Wilbur again, which wouldn’t have really solved the problem: every time these changes are made in Wilbur, a new record gets created in `live.dbo.PaymentValidationSetToCurrency`, meaning the faulty values would still remain there. Given this behaviour, I’m pretty confident whoever worked on this understands the bad implications of having multiple prices per currency in that table, and made this database change consciously to avoid it.
+ - This was done without updating the `UpdatedDate` fields as the SQL `UPDATE` statement happened.
+ - Because of this, Airbyte didn’t pick up the changes and never brought the new data for those records into the DWH. This is because Airbyte syncs the `live.dbo.PaymentValidationSetToCurrency` table incrementally, only bringing over data that was modified since the last Airbyte run. Airbyte infers whether data was modified by looking at the `UpdatedDate` field. If the field is not respected when doing updates, Core and DWH end up out of sync (see the sketch after this list).
+- I would like to emphasize the importance of preventing this type of issue. The errors caused by this instance were small, but this could turn into massive reporting mistakes. Furthermore, these are by nature very difficult to spot and troubleshoot, meaning that they could live on a long time, leading to TMT and other Managers relying on wrong reporting for their business decision making, investor reporting, etc.
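+
+As a minimal sketch of the mechanism (the cursor value and the corrected amounts below are just examples, not the real fix that was applied):
+
+```sql
+-- Incremental extraction: only rows touched since the last sync cursor are picked up.
+SELECT *
+FROM live.dbo.PaymentValidationSetToCurrency
+WHERE UpdatedDate > '2024-06-18 03:00:00';  -- example cursor value
+
+-- A manual fix that does not touch UpdatedDate stays invisible to the sync above:
+UPDATE live.dbo.PaymentValidationSetToCurrency
+SET CheckInCoverCost = 11.0
+WHERE Id = 31053;
+
+-- The safe variant also moves the cursor column forward, so the change is picked up:
+UPDATE live.dbo.PaymentValidationSetToCurrency
+SET CheckInCoverCost = 11.0,
+    UpdatedDate = GETUTCDATE()
+WHERE Id = 31053;
+```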
+
+## Final reflection on the mistakes that got us here
+
+- First, we recycled the Cancellation Cover data model for the CheckIn Cover in a rushed way, resulting in Core’s data model being completely out of sync with the reality of the service (the data model allows different hosts and currencies to have different CheckIn Cover prices, when the business logic around this service is that there’s a single, Superhog-wide price for each currency).
+- Second, we allowed the UI of Wilbur to have fields that let users modify these values on a host level, which is again completely out of sync with our business logic because different hosts shouldn’t have different prices for this service, and no user should ever change that value.
+- Third, some user managed to use the UI-feature-that-shouldn’t-exist the wrong way to change the values, even though this should really not be done.
+- Fourth, someone modified the database to fix the third mistake, but introduced *another* mistake by failing to respect the `UpdatedDate` field in the process.
+
+This is a long story of tech debt and bad choices bringing us to a costly mistake. We were lucky it didn’t cause a big problem, but it could have. I hope we can all learn from this to avoid these issues.
+
+**2024-06-21 update**: after a discussion together with Lawrence, we found out what we think is the cause of the values being “corrected” in the Core database without respecting the `UpdatedDate` field.
+
+This data is overwritten on every migration as part of the seeding process the team runs on deployments, replacing any existing values around the CheckIn Cover with the hardcoded seed values. The faulty values introduced by a user were most probably overwritten again once the team applied a new migration to the database.
+
+Besides that, this was an important finding since we also realized that the seeding process does not update the `UpdatedDate` fields.
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240619-01 - CheckIn Cover multi-price problem fabd174c34324292963ea52bb921203f.md:Zone.Identifier b/notion_data_team_no_files/20240619-01 - CheckIn Cover multi-price problem fabd174c34324292963ea52bb921203f.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/20240619-01 - CheckIn Cover multi-price problem fabd174c34324292963ea52bb921203f.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/20240621-01 - Failure of Core full-refresh Airbyte 4b308fa051694afe89c8f7147ce5ed27.md b/notion_data_team_no_files/20240621-01 - Failure of Core full-refresh Airbyte 4b308fa051694afe89c8f7147ce5ed27.md
new file mode 100644
index 0000000..160763a
--- /dev/null
+++ b/notion_data_team_no_files/20240621-01 - Failure of Core full-refresh Airbyte 4b308fa051694afe89c8f7147ce5ed27.md
@@ -0,0 +1,98 @@
+# 20240621-01 - Failure of Core full-refresh Airbyte jobs
+
+## Failure of Core full-refresh Airbyte jobs
+
+Managed by: Pablo
+
+## Summary
+
+- Components involved: dbt, Airbyte, dwh-prd
+- Started at: *2024-06-21 3:09AM CEST*
+- Detected at: *2024-06-21 3:09AM CEST*
+- Mitigated at: *2024-06-21 11:35AM CEST*
+
+Some tests around dbt materialization performed in production by Pablo on 2024-06-20, plus a lack of proper clean-up after them, resulted in Airbyte failing to run full-refresh loads from Core because it could not delete the tables properly. This left the affected tables and their dependants in the DWH stale for around 9 calendar hours / 4 business hours.
+
+## Impact
+
+The following tables were not refreshed on the nightly run of 2024-06-21, leaving their data stale and outdated at the 2024-06-20 state:
+
+| Source | Schema | Table |
+| --- | --- | --- |
+| Core (SQL Server - Live) | `Integration` | `Integration` |
+| Core (SQL Server - Live) | `Integration` | `IntegrationType` |
+| Core (SQL Server - Live) | `dbo` | `Country` |
+| Core (SQL Server - Live) | `dbo` | `Currency` |
+| Core (SQL Server - Live) | `dbo` | `PaymentStatus` |
+| Core (SQL Server - Live) | `dbo` | `PricePlanChargedByType` |
+| Core (SQL Server - Live) | `dbo` | `User` |
+| Core (SQL Server - Live) | `dbo` | `UserVerificationStatus` |
+| Core (SQL Server - Live) | `dbo` | `VerificationPaymentType` |
+| Core (SQL Server - Live) | `dbo` | `VerificationStatus` |
+
+## Timeline
+
+Timezone: CEST
+
+| Time | Event |
+| --- | --- |
+| 2024-06-21 03:00 | A scheduled job (ID: 4544) of the Airbyte sync `Superhog - Live - integration → dwh-prd (Full Refresh)` begins. |
+| 2024-06-21 03:09 | After 5 failed attempts, job 4544 is marked as failed and a warning is sent to the Slack channel `#data-alerts` |
+| 2024-06-21 06:00 | A scheduled job (ID: 4552) of the Airbyte sync `Superhog - Live -dbo → dwh-prd (Full-refresh models)` begins. |
+| 2024-06-21 06:31 | After 5 failed attempts, job 4552 is marked as failed and a warning is sent to the Slack channel `#data-alerts` |
+| 2024-06-21 08:00 | The regular, scheduled `dbt run` happens normally. Since it runs with the usual setting of materializing `staging` models as `table`, it destroys the dirty views that were left from the previous day. |
+| 2024-06-21 09:50 | Pablo picks up the alerts and research begins. |
+| 2024-06-21 11:21 | Pablo triggers the failed syncs manually. |
+| 2024-06-21 11:26 | The syncs have executed successfully. |
+| 2024-06-21 11:31 | Pablo triggers a `dbt run` manually. |
+| 2024-06-21 11:35 | The `dbt run` finishes successfully. |
+| | End of the incident. |
+
+## Root Cause(s)
+
+- Pablo ran some `dbt`, `staging` layer models in the DWH as views instead of as tables on 2024-06-20.
+- The views were left in the DWH.
+- The following full-refresh Airbyte jobs on the `sync_core` schema then failed when trying to run `DROP` on the `sync_core` tables that now had dependant views, because Airbyte runs `DROP`, not `DROP CASCADE`. This is usually not a problem since we materialize `staging` as `table`, which creates no dependency relationship between the `sync_core` tables and their `staging` table counterparts (see the sketch below).
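+
+A minimal Postgres illustration of the failure mode (the object names here are made up, not our actual tables):
+
+```sql
+-- A table with a dependent view cannot be dropped without CASCADE.
+CREATE TABLE sync_core_example (id int);
+CREATE VIEW stg_example AS SELECT * FROM sync_core_example;
+
+DROP TABLE sync_core_example;
+-- ERROR: cannot drop table sync_core_example because other objects depend on it
+
+-- CASCADE would succeed, but it also drops the dependent view, and Airbyte issues plain DROP.
+DROP TABLE sync_core_example CASCADE;
+```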
+
+## Resolution and recovery
+
+- The scheduled `dbt run` job that ran on 8:00 CEST deleted the dangling views from the previous day and brought the DWH back to using tables in the `staging` layer.
+- Without the views in place, it was only necessary to manually trigger the failed Airbyte syncs again to bring the `sync_core` tables that had been outdated back to being up to date.
+- After that, the `dbt run` was manually triggered as well to bring all the dependant models back to being up to date.
+- All jobs ran successfully and the DWH was brought back to perfect state without issues.
+
+## **Lessons Learned**
+
+- What went well
+ - Alerts made sure we picked up the problem fast
+- What went badly
+ - The manual testing left dirt in the DWH
+- Where did we get lucky
+ - The state of the `dbt` project did not change between Pablo’s testing and the next morning, which allowed the upcoming scheduled `dbt run` to automatically remove all the undesired views from the DWH, making the job of resolving the incident as simple as re-running everything. But things could have been different, and some views could have been left in the DWH, which would have made recovery more complex and error-prone.
+
+General lesson: don’t test in production.
+
+The tests Pablo was running on 2024-06-20 were related to turning our `staging` layer in the DWH from being materialized as `table` (the current setup) to `view` instead. This incident was a small sample of how this wouldn’t work as simply as initially expected, given the nature of Airbyte’s full-refresh behaviour in combination with Postgres `DROP` and `DROP CASCADE` commands.
+
+The following Github issue shows other people having the same discussion: https://github.com/airbytehq/airbyte/issues/35386
+
+That discussion led to new developments in the Airbyte Postgres connector which enabled the features that would be necessary to achieve our goal. This doc page explains the features: https://docs.airbyte.com/integrations/destinations/postgres?_gl=1*vst8mh*_gcl_au*MzcyNzc0OTUzLjE3MTc1MDM2MDI.#creating-dependent-objects
+
+It will be necessary to run version updates on Airbyte to achieve this.
+
+## Action Items
+
+- [ ] Design a way to easily replicate the production DWH in order to minimize the need to run tests there.
+- [ ] Update Airbyte to pick up the new versions of the Postgres connector and plan, test and implement the change of the `staging` layer materialization strategy from `table` to `view` properly.
+
+## Appendix
+
+Logs of the failed Airbyte sync jobs.
+
+[default_workspace_job_4552_attempt_5_txt](default_workspace_job_4552_attempt_5_txt.txt)
+
+[default_workspace_job_4544_attempt_5_txt](default_workspace_job_4544_attempt_5_txt.txt)
+
+Slack alerts by Airbyte:
+
+
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240621-01 - Failure of Core full-refresh Airbyte 4b308fa051694afe89c8f7147ce5ed27.md:Zone.Identifier b/notion_data_team_no_files/20240621-01 - Failure of Core full-refresh Airbyte 4b308fa051694afe89c8f7147ce5ed27.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/20240621-01 - Failure of Core full-refresh Airbyte 4b308fa051694afe89c8f7147ce5ed27.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/20240709 Retro 6c815a39840f408fbd935c4b3e937be3.md b/notion_data_team_no_files/20240709 Retro 6c815a39840f408fbd935c4b3e937be3.md
new file mode 100644
index 0000000..c40fcf6
--- /dev/null
+++ b/notion_data_team_no_files/20240709 Retro 6c815a39840f408fbd935c4b3e937be3.md
@@ -0,0 +1,46 @@
+# 20240709 Retro
+
+## 🙌 What went well
+
+- **Delivery**
+ - Huge advancements on reporting capabilities (KPIs, Xero, Check in Hero, currency conversion)
+ - Important/critical data captain subjects moving forward
+- **Stakeholders**
+ - Good organization and advancing with tasks, good feedback from outside
+ - Priorities setting and alignment with stakeholders is now much leaner and efficient
+ - Awareness campaign with engineers around breaking stuff in Core
+- **Internal**
+ - Data Engineering super fast survival training program
+ - High quality approaches to tough bones: incidents, refactors, etc.
+
+## 🌱 What needs improvement
+
+- **Platform**
+ - Data drift is happening and we have no scheduled full-refreshes
+ - Dbt run in production not displaying alerts
+ - PBI Licenses/Group permissions are an (invisible) ball of hair
+ - Full-refreshing in local reaches the error “could not resize shared memory segment "/PostgreSQL.4065550950" to 907743232 bytes: No space left on device”
+- **Documentation**
+ - Documentation on business KPIs, both technical (for Data) and broadly for consumers
+ - We’re dropping balls with some conventions (exposures documentation, keeping data catalogue up to date, etc)
+- Exploration of tables to check which ones have incomplete, outdated or wrong data
+- Not all stakeholders use the data request form yet
+
+## 💡 Ideas for what to do differently
+
+- Hands-on knowledge sharing by diversifying working scopes (I feel we have clear ownerships of X products)
+- Capacity to focus without interruptions
+- Possibility of managing power bi active directories ourselves
+
+## ✔ Action items
+
+- [x] Comilona soon™️
+- [ ] Fix dbt alerts
+- [ ] Agree with Ben R. on a different way to manage permissions
+- [ ] Explore local environment postgres improvements
+- [ ] Create Ticket to document KPIs dbt area
+- [x] Checklists for dbt repo
+- [ ] and PBI repo
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] 90 minutes retros
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240709 Retro 6c815a39840f408fbd935c4b3e937be3.md:Zone.Identifier b/notion_data_team_no_files/20240709 Retro 6c815a39840f408fbd935c4b3e937be3.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/20240709 Retro 6c815a39840f408fbd935c4b3e937be3.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/20240718-01 - Xe com data not retrieved 5c283e9aa4834323b38af0bff95477a5.md b/notion_data_team_no_files/20240718-01 - Xe com data not retrieved 5c283e9aa4834323b38af0bff95477a5.md
new file mode 100644
index 0000000..d7b74ad
--- /dev/null
+++ b/notion_data_team_no_files/20240718-01 - Xe com data not retrieved 5c283e9aa4834323b38af0bff95477a5.md
@@ -0,0 +1,71 @@
+# 20240718-01 - Xe.com data not retrieved
+
+# Xexe did not retrieve the data from xe.com
+
+Managed by: Uri
+
+## Summary
+
+- Components involved: [data-xexe](https://guardhog.visualstudio.com/Data/_git/data-xexe)
+- Started at: 2024-07-18 07:00 (local ES time)
+- Detected at: 2024-07-18 08:42
+- Mitigated at: 2024-07-18 16:50
+
+The Xe.com subscription was suspended due to lack of payment on Superhog’s side, which made the daily execution fail. Once the payment was made, and after confirmation from the xe.com team, a manual execution of the process completed successfully.
+
+## Impact
+
+Currency conversion rates for 17th July were not retrieved. This means that any reporting containing revenue with currency conversion is not displaying fully accurate figures, but is instead using the conversions from the previous available day (16th July). This only affects reports reading from the DWH that use backend conversion; Xero reporting is not affected. Specifically:
+
+- Currency Exchange report
+- Guest Payments report (Business Overview)
+- Main Business KPIs (Business Overview) - only Guest Payments related metrics
+- Check-in Hero Overview
+- Guest Satisfaction (Guest Insights) - not really affected since there’s no payment related metric
+
+The impact at the moment is relatively small, since only one day of currency conversion is missing, but failing to fix it soon would increase it.
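+
+For context on how this fallback plays out, below is a minimal sketch of a "latest available rate" lookup. The table and column names (`fct_guest_payments`, `dim_currency_rates`) are hypothetical, not our actual DWH models:
+
+```sql
+-- Hypothetical illustration of the fallback: when rates for a given day are
+-- missing (e.g. 17th July), each payment picks the most recent rate available
+-- on or before its payment date (here, 16th July).
+SELECT
+    p.payment_id,
+    p.amount_local,
+    p.currency,
+    p.payment_date,
+    r.rate_to_gbp,
+    p.amount_local * r.rate_to_gbp AS amount_gbp
+FROM fct_guest_payments AS p
+JOIN LATERAL (
+    SELECT cr.rate_to_gbp
+    FROM dim_currency_rates AS cr
+    WHERE cr.currency = p.currency
+      AND cr.rate_date <= p.payment_date
+    ORDER BY cr.rate_date DESC
+    LIMIT 1
+) AS r ON TRUE;
+```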
+
+## Timeline
+
+Timezone: CEST
+
+| **Time** | **Event** |
+| --- | --- |
+| 2024-07-18 07:00:06 | Xexe starts to run on version 0.1.0 |
+| 2024-07-18 07:00:09 | Error is raised by processes.py stating that “Didn’t find the fields of a good response” while running the healthcheck against xe.com API. |
+| 2024-07-18 07:00:13 | Xexe attempts to fetch the rates and fails to do so since the response seems empty, returning a python error on KeyError: ‘from’ |
+| 2024-07-18 07:00:13 | Alert is sent to #data-alerts channel |
+| 2024-07-18 08:42 | Alert is spotted by the Data Team |
+| 2024-07-18 08:48 | After checking the logs, it does not seem straight-forward at first glance. It’s clear that we do not have currency conversion data from yesterday, 17th of July 2024 |
+| 2024-07-18 08:54 | A message has been sent to the channel #data to inform that there’s an incident ongoing around currency conversion |
+| 2024-07-18 09:18 | At this stage it seems clear that the healthcheck performed against xe.com is the main issue. Maybe the API has been temporarily down, for whatever reason. I’m not able to find an API availability status on xe.com, so I can’t confirm this is the reason. At this stage, I’ll opt for a single re-run and see what happens. |
+| 2024-07-18 09:20 | A re-run is launched, but fails again. The alert is correctly sent to #data-alerts channel. Same error is displayed. |
+| 2024-07-18 09:33 | After discussing with Ben R, it seems the problem comes from the billing. A couple of emails have been already shared with Pablo on this subject according to Ben. Ben is going to take a look at it. At this stage, nothing else I (Uri) can do but wait. |
+| 2024-07-18 09:56 | Gus forwarded me the email loop from Xe.com, indeed it’s clearly linked to the billing. |
+| 2024-07-18 10:30 | Ben R confirms that the invoice has been settled now. We try a re-run. |
+| 2024-07-18 10:35 | Re-run fails with the same error. Maybe the re-activation of our account needs to be done manually from xe.com side |
+| 2024-07-18 11:11 | A follow up communication to #data channel has been sent with the details on the root cause and more detailed impacts |
+| 2024-07-18 11:13 | A follow up e-mail is sent by Ben R to the original email loop from xe.com, asking for re-activation now that it has been paid |
+| 2024-07-18 16:17 | We receive e-mail confirmation from xe.com that the account has been reinstated |
+| 2024-07-18 16:43 | A new re-run of xexe process is launched, this time finished successfully |
+| 2024-07-18 16:46 | Re-run of DWH to update all tables and reports |
+| 2024-07-18 16:50 | A couple of checks are done to ensure data has been updated accordingly. All good, we can consider the incident as mitigated |
+| 2024-07-18 16:54 | A final communication to #data channel has been sent communicating the mitigation of the incident |
+
+## Root Cause(s)
+
+The service was suspended due to lack of payment on our side. The email loop shows communications from Xe.com on this subject on 26th June, a reminder on 8th July and a final notice on 15th July. These emails were sent to [tech@guardhog.com](mailto:tech@guardhog.com) and went unnoticed by the Data team - at least by Uri/Joaquín; the forward of this e-mail to Pablo also went unnoticed since Pablo was on holidays.
+
+## Resolution and recovery
+
+Billing was settled on the same day the incident was raised. Once we got confirmation from xe.com that the account had been reinstated, re-running the daily process manually worked perfectly.
+
+## **Lessons Learned**
+
+To be filled later on
+
+## Action Items
+
+To be filled later on
+
+##
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240819 Retro 88ed749ed43b4eb7a2d277ddd2b03747.md b/notion_data_team_no_files/20240819 Retro 88ed749ed43b4eb7a2d277ddd2b03747.md
new file mode 100644
index 0000000..41dbe7c
--- /dev/null
+++ b/notion_data_team_no_files/20240819 Retro 88ed749ed43b4eb7a2d277ddd2b03747.md
@@ -0,0 +1,61 @@
+# 20240819 Retro
+
+## 🙌 What went well
+
+- **Holidays reliability**
+ - Surviving even without the whole team
+ - Survived holidays without issues
+- **Methodology**
+ - We keep on having a lot of freedom and we are using it nicely
+ - Quality and methodology stays high
+ - Capacity to investigate new tools/methodologies (CosmosDB integration, Metric Flow)
+ - More contact with development team
+ - Keeping up with our documentations and pressing other teams to do the same
+    - We are pushing the documentation culture and leading by example
+- **Stakeholders**
+ - Product initiatives should be now estimated and prioritised based on Revenue with the help of the Data team
+ - Increased access to Business Overview for PMs
+ - Our customers are very happy with us and our work is appreciated
+ - Company is starting to appreciate that data is not the owner of invoicing
+
+## 🌱 What needs improvement
+
+- **Dev env, data infra**
+ - Capacity to run models in local and not running out of memory
+ - Lack of automation around tests, CI, manual stuff
+ - Local development keeps on being a bit of a pain in the ass
+ - Data platform is growing a lot of mushroom components
+- **Priorities/Backlog**
+ - We're going to have a lot of shadow work this quarter with New Dashboard and APIs: we should make it more visible towards TMT
+- **People doing crappy stuff**
+ - Still some shitty initiatives are happening on top of supposedly “well built projects” (Grand Welcome Invoicing, MVP launch with bugs and without documentation, issues with check-in hero, etc.)
+    - Lack of technical documentation from the development team, especially impacting the holidays period
+    - Product/Engineering has made mistakes and used bad methodologies in ways that will cost us dearly
+ - Incidents go unnoticed generally on backend side
+- Reduce bus factor on key projects (for instance, invoicing)
+- Lack of sync on some initiatives: misunderstanding on New Dash MVP deliverables; the Revenue figures mismatch took quite a bit of time to align with Finance
+
+## 💡 Ideas for what to do differently
+
+- ~~Having access to all documentation from development team (confluence)~~
+- Treat Backlog/Todos columns in board with a bit more respect (bi-weekly grooming?)
+- Ensure that there’s minimum description and DOD on tickets
+
+## ✔ Action items
+
+- [x] Run invoicing for September all together holding hands
+ - [x] Invite sent
+- [x] Run MainKPIs training sessions for PMs/Other audiences
+- [x] Set bi-weekly grooming session
+- [x] Plan Data <> Tech team council of wise men on a quarterly basis
+- [x] Document progress towards quarterly goals (emphasis on unplanned work)
+- [x] Fix dbt alerts
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [x] Explore **local** environment postgres improvements
+- [x] Create Ticket to document KPIs dbt area
+- [x] Checklists for dbt repo
+ - [x] and pbi repo
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [x] 90 minutes retros
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240821-01 - SQL Server connection outage ba5caf5ba10e438a8393f63838367ad9.md b/notion_data_team_no_files/20240821-01 - SQL Server connection outage ba5caf5ba10e438a8393f63838367ad9.md
new file mode 100644
index 0000000..165301e
--- /dev/null
+++ b/notion_data_team_no_files/20240821-01 - SQL Server connection outage ba5caf5ba10e438a8393f63838367ad9.md
@@ -0,0 +1,68 @@
+# 20240821-01 - SQL Server connection outage
+
+# SQL Server connection outage
+
+Managed by: Pablo
+
+## Summary
+
+- Components involved: Core, Airbyte
+- Started at: 2024-08-21 between 8:05 and 9:00, CEST
+- Detected at: 2024-08-21 9:00 CEST
+- Mitigated at: 2024-08-21 9:28 CEST
+
+Core database had a database user named `SuperhogProductionRO` that was used by many data services to read from it. Ben R. deleted the user. Afterwards, two incremental EL runs of Airbyte failed due to the Airbyte reader jobs not being able to connect to Core. The situation was fixed by recreating the user and re-running the failed EL jobs.
+
+## Impact
+
+Almost none. DWH and Core drifted for ~90 minutes instead of the usual 60 minutes. Data team was unable to connect to Core for around 30 minutes.
+
+## Timeline
+
+All reported times are in CEST timezone.
+
+| Time | Event |
+| --- | --- |
+| 2024-08-21, sometime between 08:05 and 09:00 | Ben R. deletes the `SuperhogProductionRO` user from Core. |
+| 2024-08-21 09:00 | Scheduled airbyte job with ID 8968, for connection `Superhog - Live - dbo → dwh-prd (Incremental models)` fails with error `Failure reason: State code: S0001; Error code: 18456; Message: Login failed for user 'SuperhogProductionRO'. ClientConnectionId:142369f2-c0b1-47e0-a97b-ead406196f5f`.
+
+Logs for this job are attached below. |
+| 2024-08-21 09:05 | Scheduled airbyte job with ID 8969, for connection `Superhog - Live - survey → dwh-prd (Incremental)` fails with error `Failure reason: State code: S0001; Error code: 18456; Message: Login failed for user 'SuperhogProductionRO'. ClientConnectionId:d353592c-421a-4bf6-bd8d-087a599d0f61`.
+
+Logs for this job are attached below. |
+| 2024-08-21 09:20 | Pablo and Ben R. jump on a call to troubleshoot, Ben communicates that the user was deleted. Both agree to recreate the user with the same credentials to recover services and Ben does that on the spot. |
+| 2024-08-21 09:28 | Pablo manually triggers syncs for both Airbyte connections that failed, and both run successfully. |
+| | End of the incident. |
+
+## Root Cause(s)
+
+Ben R. deleted the `SuperhogProductionRO` user from the Core database without any notification to the Data team.
+
+## Resolution and recovery
+
+`SuperhogProductionRO` was recreated as it was before deletion.
+
+The EL jobs that failed during the outage were re-executed.
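+
+For reference, recreating a read-only login like this on SQL Server looks roughly like the sketch below. The password placeholder, the `db_datareader` role and the database name are assumptions; the exact grants Ben applied are not documented here:
+
+```sql
+-- Sketch only: recreate the server login with the original credentials,
+-- then map it to a database user with read-only access.
+CREATE LOGIN [SuperhogProductionRO] WITH PASSWORD = '<original-password>';
+
+USE [live];  -- assuming the Core database is named "live"
+CREATE USER [SuperhogProductionRO] FOR LOGIN [SuperhogProductionRO];
+ALTER ROLE [db_datareader] ADD MEMBER [SuperhogProductionRO];
+```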
+
+## **Lessons Learned**
+
+- What went well
+ - Alerts were immediate and pointed to the problem clearly.
+ - We responded in record time.
+- What went badly
+ - We had a clear organizational misalignment regarding how that DB user was being used.
+- Where we got lucky
+ - On Ben R. being available a few minutes after the issue started. Data team could not have recovered from the incident without his assistance.
+
+## Action Items
+
+- [ ] Develop documentation/alignment with the tech team on the database users the Data team depends on, so that unplanned changes like this don’t cause issues.
+- [ ] Internally, in the Data team, keep documentation on which credentials are relevant and where they are used, to ease changing them. If we had had to change the credentials everywhere `SuperhogProductionRO` was being used, we would have had to recall from memory all of those places. Having a documented list of where the user gets used would have eased the job of changing the credentials, increasing speed and confidence in the recovery.
+
+## Appendix
+
+Failed jobs log files:
+
+[default_workspace_job_8969_attempt_1_txt](default_workspace_job_8969_attempt_1_txt.txt)
+
+[default_workspace_job_8968_attempt_1_txt](default_workspace_job_8968_attempt_1_txt.txt)
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240902-01 - Missing payment details in intermedi f2067416c0824fc686513937b3fbca78.md b/notion_data_team_no_files/20240902-01 - Missing payment details in intermedi f2067416c0824fc686513937b3fbca78.md
new file mode 100644
index 0000000..f8c3501
--- /dev/null
+++ b/notion_data_team_no_files/20240902-01 - Missing payment details in intermedi f2067416c0824fc686513937b3fbca78.md
@@ -0,0 +1,68 @@
+# 20240902-01 - Missing payment details in intermediate
+
+# Missing payment details in intermediate
+
+Managed by: Pablo
+
+## Summary
+
+- Components involved: dbt run, dbt test and airbyte
+- Started at: Unknown, probably months ago
+- Detected at: 2024-08-31, 08:16AM CEST
+- Mitigated at: 2024-09-02, 11:57AM CEST
+
+The trigger of `dbt run` and Airbyte’s incremental jobs at the same time every morning had been causing data integrity issues in the DWH for a long time. Our recent release of a scheduled `dbt test` just a few minutes after the scheduled `dbt run` made the problem obvious, and we finally got to fix it.
+
+## Impact
+
+For months, a handful of verification payments were missing money amounts, both in local currency and GBP. This might have made total revenue figures in some reports insignificantly wrong (deviations being smaller than 0.1%).
+
+## Timeline
+
+All reported times are in CEST timezone.
+
+| Time | Event |
+| --- | --- |
+| Some time around March 2024 | Pablo implements the `int_core__verification_payments` model in the `dbt` project. |
+| Some time around August 2024 | As we work on implementing scheduled `dbt` tests, we recurrently observe issues with some `not null` tests on model `int_core__verification_payments`. The behaviour is flaky, so no special attention is paid at first. |
+| 2024-08-31 08:16AM | A `dbt test` fails, showing some null value issues in money related columns in the model `int_core__verification_payments` |
+| 2024-09-01 08:16AM | A `dbt test` fails, showing some null value issues in money related columns in the model `int_core__verification_payments` |
+| 2024-09-02 08:16AM | A `dbt test` fails, showing some null value issues in money related columns in the model `int_core__verification_payments` |
+| 2024-09-02 09:00AM | Data team notices the issue (previous alerts happened on the weekend) and starts investigating. |
+| 2024-09-02 11:30AM | Pablo spots the possible issue and confirms it by triggering another `dbt run` NOT on the hour and re-running `dbt test`. |
+| 2024-09-02 11:35AM | Pablo changes the schedule of `dbt run` and `dbt test` to ensure that the `dbt run` doesn’t clash with Airbyte jobs and that the `dbt test` runs clearly after `dbt run`. |
+| | End of the incident. |
+
+## Root Cause(s)
+
+Some Airbyte jobs were running simultaneously with the scheduled runs of our dbt project. This meant the `dbt run` executions were happening while the `sync` layer of some sources was being populated. Because of this, `dbt run` wasn’t operating on a consistent snapshot of the `sync` layer, which caused referential integrity issues downstream.
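+
+For context, the `not_null` tests that flagged this compile to roughly the query below (the test fails whenever the query returns rows); the specific column name is an assumption, not the exact schema:
+
+```sql
+-- Roughly what a dbt not_null test compiles to for a money column on
+-- int_core__verification_payments. During the clash with Airbyte, freshly
+-- synced payments could land before the rows carrying their amounts did,
+-- leaving nulls behind.
+SELECT *
+FROM int_core__verification_payments
+WHERE amount_gbp IS NULL;
+```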
+
+## Resolution and recovery
+
+Every day the problem fixed itself for the previous day’s issues, so each day we only suffered row-level issues for the current day.
+
+The same-day recovery was achieved by simply running `dbt run` when Airbyte jobs were NOT running.
+
+The proper resolution was achieved by re-arranging the schedule of jobs so that `dbt run` does not happen at the same time as Airbyte.
+
+## **Lessons Learned**
+
+- What went well
+ - Our `dbt test` were super useful to spot this happening every day.
+- What went badly
+ - This issue seems to have existed for 6 months. The business impact was tiny, but how long this was alive is highly concerning.
+- Where did we get lucky
+ - We got lucky in the impact being tiny.
+
+More generally, this is a first symptom of our home-made bash orchestration starting to show some wrinkles. A more sophisticated orchestration engine should enable us to link Airbyte and dbt executions together, which would allow us to prevent this kind of issue and also be smarter in our ELT (lower latency, fewer redundant jobs, reasonable downstream management when something upstream fails, etc.).
+
+No need to rush into it, but it should be taken into account.
+
+## Action Items
+
+- [ ] Educate the team on scheduling patterns.
+- [ ] Ensure the new orchestration engine deployment gets the right priority.
+
+## Appendix
+
+-
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240913 Retro f75a7d97742d492fb3587844fa700926.md b/notion_data_team_no_files/20240913 Retro f75a7d97742d492fb3587844fa700926.md
new file mode 100644
index 0000000..ba3da66
--- /dev/null
+++ b/notion_data_team_no_files/20240913 Retro f75a7d97742d492fb3587844fa700926.md
@@ -0,0 +1,57 @@
+# 20240913 Retro
+
+## 🙌 What went well
+
+- **dbt**
+ - Improvement of performance on dbt project, had less incidents when running models.
+ - dbt tests are working great
+ - DBT automatic testing to detect shitty stuff going on
+- **Collaboration with teams**
+ - Quarterly session with Tech team was a great thing to do
+ - Nice discussions / alignments with TMT/Tech leads
+ - Very nice internal Data Team collaboration!!!!
+ - Nice advancements on aligning with Guest Squad (tracking, A/B testing, etc)
+ - Data Comilonas <3
+- **Delivery**
+ - We’re doing great in making the most out of our infra
+ - Very glad to have Cosmos DB integrated into DWH and the following refactors to centralise efforts
+ - Unleashing analytical capabilities with KPIs by Categories
+ - Documentation keeps on being great, we’re being recognized as exemplary on it
+- Probation periods successfully passed!
+- Truvi logo is now readable (and cool)
+
+## 🌱 What needs improvement
+
+- **PBI Awareness/know how**
+ - Lack of knowledge of “this data is available in this report”
+ - Spread more knowledge on (1) what PBIs we have + (2) best ways to use them
+ - I suspect a lot of reports are being super-underused, but we can’t monitor that easily
+ - PBI users need to have more knowledge as to where and how they can get the data they need
+- **Data Contracts/Issues with Tech**
+ - Docs and comms with Tech team are not on their best spot + we might need to raise this more loudly and frequently + no clear visibility on every time tech is impacting data
+ - Shitty stuff happens on Tech deployments:
+        - 27th August: fake increase in Check-out Bookings and Cancelled Bookings because of PMS issues. For the latter, the issue still persists
+ - New Dash MVP migration on 10th of September broke the report despite anticipating the changes needed from dev team
+    - Remind development not to change tables or add test data without our knowledge
+- A bit stressed about not knowing when/if we’ll have a Data Engineer position opening soon
+
+## 💡 Ideas for what to do differently
+
+- Incentivise people to use the Data Request form for ad-hoc requests and for asking for permissions
+- Explore the possibility to have a “report” to check PBI report usage
+- Have a PBI 101 class for users
+- Since we rarely have incidents, we might need to cause them to grease the groove
+- Open Data Comilonas with different stakeholders and colleagues from time to time (it was nice with Joan!)
+
+## ✔ Action items
+
+- [x] Schedule Data Comilona
+- [x] Pablo schedules his Chaos Monkey role
+- [x] Research if there is any better way to monitor PBI report usage
+- [ ] Schedule harsh therapy session with Lou
+- [ ] If Data Engineer vacancy doesn’t progress by end of September, pursue sign-off on consequences
+- [ ] Think about how to make some kind of “PBI Homepage” where Superhog personnel can find all the PBIs that are available easily
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240913-01 - dbt run blocked by “not in the graph 1030446ff9c980c291f1d57751f443ee.md b/notion_data_team_no_files/20240913-01 - dbt run blocked by “not in the graph 1030446ff9c980c291f1d57751f443ee.md
new file mode 100644
index 0000000..f449077
--- /dev/null
+++ b/notion_data_team_no_files/20240913-01 - dbt run blocked by “not in the graph 1030446ff9c980c291f1d57751f443ee.md
@@ -0,0 +1,67 @@
+# 20240913-01 - dbt run blocked by “not in the graph” error
+
+# `dbt run` blocked by “not in the graph” error
+
+Managed by: Pablo
+
+## Summary
+
+- Components involved: DWH, dbt
+- Started at: *When did the issue actually start*
+- Detected at: *When did we notice that the incident existed*
+- Mitigated at: *When did we bring things to a stable state without further impact*
+
+We deployed for the first time a version of our `dbt` project that used the versioning features of `dbt`. An active bug in `dbt core` prevented any `dbt` commands like `dbt run` or `dbt test` from working because the compilation of the project would fail. The issue was resolved by applying a somewhat patchy workaround that allowed `dbt` to work properly again.
+
+## Impact
+
+None beyond some noise in the alerts channel and making Pablo’s Friday afternoon hectic.
+
+## Timeline
+
+*Keeping it simple on this one since there isn’t much value in tracking stuff in hyper detail.*
+
+All reported times are in CEST Timezone.
+
+| Time | Event |
+| --- | --- |
+| 2024-09-13 15:24 | Pablo merges [PR #2771](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/2771) in the dbt project |
+| 2024-09-13 15:25 | Pablo manually triggers the `run_dbt.sh` script in production and the execution fails |
+| 2024-09-13 15:25-15:52 | Pablo scrambles around trying to understand what the heck is happening. |
+| 2024-09-13 15:52 | Pablo manages to get a first successful `dbt run` after applying one of the workarounds suggested by dbt labs |
+| | End of the incident. |
+
+## Root Cause(s)
+
+The root cause is a bug in `dbt` when it attempts to parse and compile the project. The bug is triggered by adding a new version to an existing model that didn’t have versions before. It is unknown at this point whether this bug is also triggered when adding an additional version to a model that is already versioned. The bug has been identified by dbt Labs and is tracked in this issue in the `dbt core` repository: https://github.com/dbt-labs/dbt-core/issues/8872
+
+Within our platform, the issue was introduced by the following PR in our `dbt` project repository: https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/2771. The PR introduced a new version for model `int_core__verification_payments`, which triggered the `dbt core` bug when we started to run `dbt` commands in production.
+
+## Resolution and recovery
+
+I manually modified our deployed `dbt_run.sh` script in production to include `dbt clean` and `dbt deps` commands before the `run` one. This deleted the `target` folder and fixed the issue, since this is one of the suggested workarounds. Subsequent executions of the script after this fix ran perfectly fine.
+
+## **Lessons Learned**
+
+What went well:
+
+- NA
+
+What went badly
+
+- This issue already happened in my local (Pablo) some days before, but I dismissed it as some silly flaky behaviour. I guess I probably randomly executed one of the workarounds (running a `dbt clean`) and in the process, accidentally fixed the issue without really understanding what had happened. The lesson here is to not dismiss quirky behaviours in `dbt` and to try to understand them fully (even reproduce them if necessary) so that we can be confident and in control at all times.
+
+Where did we get lucky:
+
+- The fact that the issue was already spotted and documented in the official `dbt core` repository made handling the situation much simpler. Had there been no public showcase of the bug source and workarounds, we would have had a bad time fixing and understanding things since it would have required diving into the internals of `dbt`.
+
+Besides these lessons, I would also suggest this was a great reminder of the fact that the open source tools we rely on are by no means perfect, and that we must be alert when stuff goes south and always consider the option that they have bugs.
+
+## Action Items
+
+- [ ] Evaluate our options around Blue/Green deployments, which would let issues like this happen without leaving a single scratch on the DWH consumers read from (besides an inevitable delay in the refreshing of data).
+- [x] Track the `dbt` bug (https://github.com/dbt-labs/dbt-core/issues/8872) so that we can adjust our code once it’s fixed
+
+## Appendix
+
+-
\ No newline at end of file
diff --git a/notion_data_team_no_files/20240919-01 - dbt test failure because wrong confi 1060446ff9c98081896ad46ad0b153e7.md b/notion_data_team_no_files/20240919-01 - dbt test failure because wrong confi 1060446ff9c98081896ad46ad0b153e7.md
new file mode 100644
index 0000000..8fa2bd1
--- /dev/null
+++ b/notion_data_team_no_files/20240919-01 - dbt test failure because wrong confi 1060446ff9c98081896ad46ad0b153e7.md
@@ -0,0 +1,66 @@
+# 20240919-01 - dbt test failure because wrong configuration in schema file
+
+# dbt test failure because wrong configuration in schema file
+
+Managed by: Uri
+
+## Summary
+
+- Components involved: data-dwh-dbt-project
+- Started at: 2024-09-18 12:41 CEST
+- Detected at: 2024-09-19 08:43 CEST
+- Mitigated at: 2024-09-19 09:01 CEST
+
+Buggy code was committed and merged into master on the 18th of September and went unnoticed. In the scheduled production run on the morning of the 19th, dbt test failed because it couldn’t compile the test. The fix was to remove the buggy configuration in the schema entry of `core__bookings`, merge, and re-run dbt test in prod.
+
+## Impact
+
+Not a massive impact: it was only a test failing to compile for the `core__bookings` model in the reporting layer.
+
+## Timeline
+
+- 2024-09-18 12:41 CEST - Faulty commit [923bfa70](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/923bfa70919bf552304150e8cc3ec9af7cdbe708?refName=refs%2Fheads%2Fmaster&path=%2Fmodels%2Freporting%2Fcore%2Fschema.yml&_a=contents) is created
+- 2024-09-18 16:30 CEST - Branch containing the faulty commit in pull request [!2877](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/2877?_a=files&path=/models/reporting/core/schema.yml) is merged into production
+- 2024-09-19 08:43 CEST - Data team sees the alert in `#data-alerts` slack channel
+- 2024-09-19 08:48 CEST - Data team accesses the production logs of dbt tests to notice the failure, specifically:
+
+ > Compilation Error in test not_nullgit_core__bookings_id_booking (models/reporting/core/schema.yml)
+ >
+- 2024-09-19 08:51 CEST - The faulty commit [923bfa70](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/923bfa70919bf552304150e8cc3ec9af7cdbe708?refName=refs%2Fheads%2Fmaster&path=%2Fmodels%2Freporting%2Fcore%2Fschema.yml&_a=contents) is spotted. Uri proceeds to create a PR to remove the issue.
+- 2024-09-19 09:00 CEST - The fix is merged in production in commit [feaedb2a](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/feaedb2a06bd37555217ee0fb645c8f5a07b070d?refName=refs%2Fheads%2Fmaster)
+- 2024-09-19 09:01 CEST - Successful re-run of the dbt tests with the fix.
+
+## Root Cause(s)
+
+An involuntary human error modified a line of code in the schema entry of `core__bookings`, in the test section for `id_booking`, turning the `not_null` test into `not_nullgit p`. This change, in the commit [923bfa70](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/923bfa70919bf552304150e8cc3ec9af7cdbe708?refName=refs%2Fheads%2Fmaster&path=%2Fmodels%2Freporting%2Fcore%2Fschema.yml&_a=contents), happened after the data team members had reviewed and approved PR [!2877](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/2877?_a=files&path=/models/reporting/core/schema.yml), thus it went unnoticed.
+
+
+
+## Resolution and recovery
+
+The fix was straightforward: just change `not_nullgit p` back to `not_null`, merge into prod, and re-run the dbt tests successfully.
+
+## **Lessons Learned**
+
+What went well:
+
+- dbt test alerts work well and the team effectively checks the channel once an alert is raised.
+
+What went badly
+
+- Improper self-review and cross-review of code before merging. Personally, I didn’t check the PR since it was already approved by Pablo; at the same time, that approval came before the faulty commit. We should all be more careful/sceptical when merging into production, especially if we have already left an approval on the PR.
+
+Where did we get lucky:
+
+- Minimal impact: it was just a single failing test in the reporting schema that would have passed anyway. However, the situation could have been much worse if the bug had been directly in a model’s code.
+
+## Action Items
+
+- Review and re-review PRs regardless of whether they have already been approved.
+- Check commits made after the approval.
+- When merging into prod, run both the normal execution of dbt (`run_dbt.sh`) and the tests (`run_tests.sh`). This would have made this issue appear earlier
+- Automate CI checks on the dbt project (try to compile the project and perhaps also run tests on every PR, block merging if it doesn’t work)
+
+##
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241008 Retro 1190446ff9c9807982abfe76f161994f.md b/notion_data_team_no_files/20241008 Retro 1190446ff9c9807982abfe76f161994f.md
new file mode 100644
index 0000000..3302b52
--- /dev/null
+++ b/notion_data_team_no_files/20241008 Retro 1190446ff9c9807982abfe76f161994f.md
@@ -0,0 +1,46 @@
+# 20241008 Retro
+
+## 🙌 What went well
+
+- **Team and Jarana**
+ - Data Comilona + Meetup = Nice combo! +1
+ - We survived the period without Ben
+- **Data Platform**
+ - No more dumps <3
+ - First usages of dbt versioning have worked nicely
+ - dbt testing is working nicely so far
+ - Performance and DevEx improvements in both production and dev environments have made things much better
+- **Delivery**
+ - Hubspot integration in progress
+ - Data requests feedback was very positive
+ - KPIs sessions with PMs were quite insightful
+ - Q3 was closed with success and everyone is happy, plans for Q4 are clear and aligned with everyone
+ - Business Overview PBI app looks very good after latest changes from Joaquin
+- **Stakeholders**
+ - New Dash/New Pricing much better communication with Product/Tech teams
+ - Very happy to see that data quality improvements are being prioritised in Guest Squad (GJ completion date)
+
+## 🌱 What needs improvement
+
+- **DE Vacancy**
+ - Data engineer vacancy
+ - Timing for Data Engineer is looking ugly
+- **New dash**
+ - Still it’s quite confusing and time consuming to work on New Dash/New Pricing
+ - New Dash/Tech/Product drama
+- **Other**
+ - Edeposit invoicing misunderstanding
+
+## 💡 Ideas for what to do differently
+
+-
+
+## ✔ Action items
+
+- [ ] If Data Engineer vacancy doesn’t progress by end of September, pursue sign-off on consequences
+- [ ] Think about how to make some kind of “PBI Homepage” where Superhog personnel can find all the PBIs that are available easily
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] Document existing invoicing processes, not just new ones
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241104-01 - Booking invoicing incident due to bu 82f0fde01b83440e8b2d2bd6839d7c77.md b/notion_data_team_no_files/20241104-01 - Booking invoicing incident due to bu 82f0fde01b83440e8b2d2bd6839d7c77.md
new file mode 100644
index 0000000..99d1e82
--- /dev/null
+++ b/notion_data_team_no_files/20241104-01 - Booking invoicing incident due to bu 82f0fde01b83440e8b2d2bd6839d7c77.md
@@ -0,0 +1,241 @@
+# 20241104-01 - Booking invoicing incident due to bulk UpdatedDate change
+
+# Booking invoicing incident due to bulk UpdatedDate change
+
+Managed by: Pablo
+
+## Summary
+
+- Components involved: Superhog Backend SQL Server, DWH, PBI Reports, `sh-invoicing-exporter` tool
+- Started at: 2024-10-30 12:55:07.653 UTC
+- Detected at: first symptoms noticed around 2024-10-31 08:00:00 UTC, but severity was truly understood on 2024-11-04 10:57:00 UTC
+- Mitigated at: 2024-11-05 10:57:00 UTC
+
+A bulk backfill executed on the application SQL Server database to fix some not-relevant-to-this-incident column resulted in tens of thousands of `VerificationRequest` records having their `UpdatedDate` modified to when the backfill was executed.
+
+A poor assumption in the old dash invoicing logic was exposed by this, causing (1) the billable bookings metrics and reports to show utterly wrong data for 6 days and (2) a delay of ~27 hours in the delivery of the old dash invoicing exports to the Finance team for the October ’24 period.
+
+A backup of our SQL Server was restored and the incident-triggering changes were reverted in an emergency to unblock the generation of the invoicing exports. The root issue still exists and needs to be addressed.
+
+## Impact
+
+- On the invoicing process:
+    - The old dash invoicing process, which begins with the generation of exports run by Pablo, should have started on 2024-11-04 08:00:00 UTC. Instead, it started on 2024-11-05 10:57:00 UTC, adding a delay of ~27 hours to all dependent Finance team processes.
+- On reporting:
+ - Since 2024-10-30 12:55:07.653 UTC and until 2024-11-05 10:20:00 UTC.
+    - The DWH table `int_core__booking_charge_events` was displaying tens of thousands of wrong billable bookings on 2024-10-30. This propagated through other DWH tables and finally to reports, which contained a grossly wrong figure for October ’24.
+    - The Business Overview > Main KPIs report showed a grossly wrong count of invoiceable bookings for October (somewhere 2-3 orders of magnitude above what the number should have been).
+    - The Business Overview > Host Fees report showed inflated numbers for the Superhog-inferred billable bookings count and booking fees revenue for October ’24 (somewhere 2-3 orders of magnitude above what the number should have been).
+- On the Guest Squad efforts:
+    - Our mitigation solution of reverting the changes made to `UpdatedDate` might have caused more trouble: the SQL Server data is now *lying*, since the `PaymentValidationSetId` values of many records *were* updated on 2024-10-30 12:55:07 UTC, but the `UpdatedDate` values of those records no longer reflect that. I’m not aware of how this may cause further problems, but it could.
+
+## Timeline
+
+All times are UTC.
+
+| Time | Event |
+| --- | --- |
+| 2024-10-30 12:55:07 | The bulk update script for the `PaymentValidationSetId` column on the table `VerificationRequest` gets executed, changing the `UpdatedDate` value of tens of thousands of records. |
+| 2024-10-31 06:28:00 | An automated outliers data alert gets triggered due to the wild variance in the Estimated Billable Bookings KPI. |
+| 2024-10-31 07:49:00 | Uri notices the issue (leaving a note in the data alerts chat) and correctly spots the fact that there is a spike of booking charge events on the date 2024-10-30. |
+| 2024-10-31 08:30:00 | The alert gets discussed during the Data Team’s daily call. Pablo wrongly judges that the `UpdatedDate` data shouldn’t cause an issue in invoicing and it’s just a minor KPI blip that can be fixed in the future, and the team decides that the alert is not urgent. |
+| 2024-11-04 08:30:00 | The Data team discusses this topic again in the daily call. The fact that it’s an invoicing exports day increases attention, and upon looking into some details, the team changes its mind and realises there might be serious implications for invoicing. |
+| 2024-11-04 09:30:00 | Ben C. asks the Data team about the report in Business Overview > Host Fees > Booking Fees - Superhog showing some wildly high numbers for October. |
+| 2024-11-04 10:51:00 | After some detailed research, Pablo realises that the invoicing imports are broken and starts the #invoicing-firefightning slack channel to gather stakeholders. |
+| 2024-11-04 11:09:00 | Pablo and Ben R. discuss about the issue and assess the option of switching the invoicing code to rely on `LinkUsed` instead of `UpdatedDate`. They agree on Pablo examining if that would do the trick. |
+| 2024-11-04 12:17:00 | Pablo concludes that the naive `LinkUsed` option won’t do the trick due to how data looks in `VerificationRequest`, and comes back to Ben R. to discuss how to proceed. They agree to, instead, restore the original values of the `UpdatedDate` columns in the records that were updated on 2024-10-30 12:55:07. |
+| 2024-11-04 12:23:00 | Ben R. starts restoring a database backup to restore the records. |
+| 2024-11-04 15:58:00 | Since the restore is taking longer than expected, Ben R. proposes running a simpler update by leveraging some fields in `VerificationRequest`, but Pablo points out that a partial solution won’t help the Finance team since running the exports multiple times would mean Finance’s manual work is only useful after the final export. |
+| 2024-11-05 08:00:00 | After a first failed restore, the second backup restore works on SQL Server. |
+| 2024-11-05 9:18:00 | Ben uses the restored data to revert the `UpdatedDate` changes in the records that were modified on 2024-10-30. |
+| 2024-11-05 10:20:00 | Pablo starts a backfill in Airbyte and a dbt run right after to propagate the new updated records throughout the DWH. |
+| 2024-11-05 10:54:00 | Pablo confirms that the large cluster of bookings attributed to October is not there anymore, and that the downstream reporting shows correct figures again. |
+| 2024-11-05 10:57:00 | Pablo triggers the export of the invoicing reports for the October period. |
+| 2024-11-05 14:53:00 | The exports finish successfully and Pablo shares them with Jamie D. |
+| | Incident mitigated. |
+
+## Root Cause(s)
+
+The root cause is a combination of:
+
+- A poorly-chosen assumption in the old dash invoicing logic (the usage of `UpdatedDate` field in the `VerificationRequest` table to decide in which month should a booking be charged for its booking fee when it is supposed to be charged on `VerificationStartDate`).
+    - ([see these lines](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=336&lineEnd=339&lineStartColumn=25&lineEndColumn=26&lineStyle=plain&_a=contents) in the latest release of `sh-invoicing-exporter`, which are the conceptual grand-children of [these lines](https://guardhog.visualstudio.com/Superhog/_git/superhog-invoicing-console-app?path=/SuperhogInvoicing/SQLQueries.cs&version=GBmaster&line=159&lineEnd=165&lineStartColumn=3&lineEndColumn=6&lineStyle=plain&_a=contents) in the old C# script, to understand the faulty assumption)
+ - This is a conceptual problem that we still need to address if we want to prevent significant issues in future invoicing cycles. Our initial mitigation was treating symptoms, not the core issue.
+- An out-of-BAU bulk update on the `VerificationRequest` table in the backend SQL Server. I would like to make clear that the intent of this bulk update was perfectly legitimate and its execution was also proper. Even though its side effects have been troublesome, the update itself was neither an issue nor a mistake.
+
+So, the true issue is the troublesome reliance of the invoicing code on `UpdatedDate`, which is always a small issue, but turns into a massive one anytime any tech squad in Superhog performs an update on the `VerificationRequest` table that goes beyond the usual activity of the application. Given that these out-of-the-usual operations will keep happening in the future, it is important that we address the true issue to avoid more incidents like this one.
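+
+To make the faulty assumption concrete, here is an illustrative sketch (simplified, not the actual `sh-invoicing-exporter` query) of the fragile pattern versus the direction a fix would take. It assumes `VerificationStartDate` is available on `VerificationRequest`:
+
+```sql
+-- Fragile pattern (illustrative): billing month driven by when the row was
+-- last touched. Any bulk backfill that rewrites UpdatedDate drags old
+-- bookings into the current invoicing period, as happened on 2024-10-30.
+SELECT [vr].[Id]
+FROM [dbo].[VerificationRequest] [vr]
+WHERE [vr].[UpdatedDate] >= '2024-10-01' AND [vr].[UpdatedDate] < '2024-11-01';
+
+-- Sturdier direction: anchor the billing month on the verification's own
+-- start date, so row-level maintenance cannot move bookings between periods.
+SELECT [vr].[Id]
+FROM [dbo].[VerificationRequest] [vr]
+WHERE [vr].[VerificationStartDate] >= '2024-10-01' AND [vr].[VerificationStartDate] < '2024-11-01';
+```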
+
+## Resolution and recovery
+
+The problem was mitigated by reverting the changes made to `UpdatedDate`, using a restored backup of the SQL Server database and an ad-hoc script run on the production database.
+
+This allowed us to bring reporting back to normal and continue the invoicing exports, at the expense of leaving the SQL Server database in an inconsistent state.
+
+The true solution to the problem is still unaddressed (see the root cause section).
+
+## **Lessons Learned**
+
+*List of knowledge acquired. Typically structured as: What went well, what went badly, where did we get lucky*
+
+- What went well
+ - Automated KPI outlier tests from the Data team brought the spike of billable bookings into Uri’s attention.
+ - The production backups of the SQL Server database allowed us to restore the original `UpdatedDate` values, providing us a fast way to unblock the invoicing process.
+- What went badly
+ - Even though we got an early alert, Pablo wrongly triaged the unusually high number of billable bookings as a minor issue that wouldn’t impact the invoicing process.
+    - The faulty logic/assumption used to place booking invoicing in time has been sitting around for years, undocumented. We don’t have any trace of why we built it this way in the first place.
+    - The faulty logic/assumption used to place booking invoicing in time might have been leading to wrong placement in time of booking fees for a long time, but the high complexity of the logic and the way record history is managed in the database make it very hard to understand the true extent of the issue.
+ - Obtaining a restore of the production database can take multiple hours.
+ - We screwed up with the consistency of data in `VerificationRequest`. Tens of thousands of `UpdatedDate` values in the `VerificationRequest` table are now wrong.
+
+## Action Items
+
+- [ ] Identify all business logic which is now relying on the `UpdatedDate` field of the `VerificationRequest` table in the SQL Server database.
+- [ ] Once the above logic is catalogued, apply changes and fixes so that `UpdatedDate` can be modified without causing incidents.
+- [ ] Potentially, extend the exercise beyond `VerificationRequest`, since the same problem pattern could apply to all sorts of update-able tables in the SQL Server database.
+
+## Appendix
+
+*Miscellanea corner for anything else you might want to include*
+
+- Link to first notes when we started tackling the issue: [20241101 - Invoicing UpdateDate mess up](https://www.notion.so/20241101-Invoicing-UpdateDate-mess-up-1340446ff9c980b2926fc6284572f740?pvs=21)
+- Code for the bulk update script executed on 2024-10-30
+
+ ```sql
+ DECLARE @CurrentDate AS DATETIME = GETDATE()
+
+ SELECT
+ [pvs].[VerificationRequestId]
+ , [vr].[GuestJourneyCompletedDate]
+ , [vr].[ExpiryDate]
+ , [vr].[PaymentValidationSetId]
+ , [pvs].[PaymentValidationSetId]
+ FROM
+ (
+ SELECT [pvs].[VerificationRequestId], [pvs].[PaymentValidationSetId]
+ FROM
+ (
+ SELECT
+ [vr].[Id] AS [VerificationRequestId]
+ , COALESCE(
+ [vr].[OverridePaymentValidationSetId],
+ [a].[PaymentValidationSetId],
+ [pvs_a].[Id],
+ [pvs_d].[Id]
+ ) AS [PaymentValidationSetId]
+
+ FROM [dbo].[VerificationRequest] [vr]
+ -- Listing Override
+ LEFT JOIN [dbo].[Booking] [b] ON [b].[VerificationRequestId] = [vr].[Id]
+ LEFT JOIN [dbo].[Accommodation] [a] ON [a].[AccommodationId] = [b].[AccommodationId]
+ -- Account Override
+ LEFT JOIN [dbo].[PaymentValidationSet] [pvs_a] ON [pvs_a].[SuperhogUserId] = [vr].[CreatedByUserId] AND [pvs_a].[IsCustom] = 0 AND [pvs_a].[IsActive] = 1
+ -- Default
+ LEFT JOIN [dbo].[PaymentValidationSet] [pvs_d] ON [pvs_d].[SuperhogUserId] IS NULL AND [pvs_d].[IsCustom] = 0 AND [pvs_d].[IsActive] = 1
+ ) [pvs]
+ GROUP BY [pvs].[VerificationRequestId], [pvs].[PaymentValidationSetId]
+ ) [pvs]
+ LEFT JOIN [dbo].[VerificationRequest] [vr] ON [vr].[Id] = [pvs].[VerificationRequestId]
+ LEFT JOIN [dbo].[user] [u] ON [u].[Id] = [vr].[SuperhogUserId]
+ LEFT JOIN [dbo].[Country] [co] ON [co].[Id] = [u].[BillingCountryId]
+ LEFT JOIN [dbo].[Currency] [cu] ON [cu].[Id] = [co].[PreferredCurrencyId]
+ LEFT JOIN [dbo].[PaymentValidationSetToCurrency] [pvstc] ON [pvstc].[PaymentValidationSetId] = [pvs].[PaymentValidationSetId] AND [pvstc].[CurrencyIso] = [cu].[IsoCode]
+
+ --WHERE [VerificationRequestId] = 913616
+ WHERE [vr].[GuestJourneyCompletedDate] IS NULL
+ and [vr].[PaymentValidationSetId] IS NULL
+ and [vr].[ExpiryDate] >= GETDATE()
+
+ ---and [VerificationRequestId] = 913616
+
+ BEGIN TRAN
+
+ UPDATE [vr]
+
+ SET
+ [PaymentValidationSetId] = [pvs].[PaymentValidationSetId]
+ , [UpdatedDate] = @CurrentDate
+
+ FROM
+ (
+ SELECT [pvs].[VerificationRequestId], [pvs].[PaymentValidationSetId]
+ FROM
+ (
+ SELECT
+ [vr].[Id] AS [VerificationRequestId]
+ , COALESCE(
+ [vr].[OverridePaymentValidationSetId],
+ [a].[PaymentValidationSetId],
+ [pvs_a].[Id],
+ [pvs_d].[Id]
+ ) AS [PaymentValidationSetId]
+
+ FROM [dbo].[VerificationRequest] [vr]
+ -- Listing Override
+ LEFT JOIN [dbo].[Booking] [b] ON [b].[VerificationRequestId] = [vr].[Id]
+ LEFT JOIN [dbo].[Accommodation] [a] ON [a].[AccommodationId] = [b].[AccommodationId]
+ -- Account Override
+ LEFT JOIN [dbo].[PaymentValidationSet] [pvs_a] ON [pvs_a].[SuperhogUserId] = [vr].[CreatedByUserId] AND [pvs_a].[IsCustom] = 0 AND [pvs_a].[IsActive] = 1
+ -- Default
+ LEFT JOIN [dbo].[PaymentValidationSet] [pvs_d] ON [pvs_d].[SuperhogUserId] IS NULL AND [pvs_d].[IsCustom] = 0 AND [pvs_d].[IsActive] = 1
+ ) [pvs]
+ GROUP BY [pvs].[VerificationRequestId], [pvs].[PaymentValidationSetId]
+ ) [pvs]
+ LEFT JOIN [dbo].[VerificationRequest] [vr] ON [vr].[Id] = [pvs].[VerificationRequestId]
+ LEFT JOIN [dbo].[user] [u] ON [u].[Id] = [vr].[SuperhogUserId]
+ LEFT JOIN [dbo].[Country] [co] ON [co].[Id] = [u].[BillingCountryId]
+ LEFT JOIN [dbo].[Currency] [cu] ON [cu].[Id] = [co].[PreferredCurrencyId]
+ LEFT JOIN [dbo].[PaymentValidationSetToCurrency] [pvstc] ON [pvstc].[PaymentValidationSetId] = [pvs].[PaymentValidationSetId] AND [pvstc].[CurrencyIso] = [cu].[IsoCode]
+
+ WHERE [vr].[GuestJourneyCompletedDate] IS NULL
+ and [vr].[PaymentValidationSetId] IS NULL
+ and [vr].[ExpiryDate] >= GETDATE()
+
+ --and [VerificationRequestId] = 913616
+
+ SELECT
+ [pvs].[VerificationRequestId]
+ , [vr].[GuestJourneyCompletedDate]
+ , [vr].[ExpiryDate]
+ , [vr].[PaymentValidationSetId]
+ , [pvs].[PaymentValidationSetId]
+ FROM
+ (
+ SELECT [pvs].[VerificationRequestId], [pvs].[PaymentValidationSetId]
+ FROM
+ (
+ SELECT
+ [vr].[Id] AS [VerificationRequestId]
+ , COALESCE(
+ [vr].[OverridePaymentValidationSetId],
+ [a].[PaymentValidationSetId],
+ [pvs_a].[Id],
+ [pvs_d].[Id]
+ ) AS [PaymentValidationSetId]
+
+ FROM [dbo].[VerificationRequest] [vr]
+ -- Listing Override
+ LEFT JOIN [dbo].[Booking] [b] ON [b].[VerificationRequestId] = [vr].[Id]
+ LEFT JOIN [dbo].[Accommodation] [a] ON [a].[AccommodationId] = [b].[AccommodationId]
+ -- Account Override
+ LEFT JOIN [dbo].[PaymentValidationSet] [pvs_a] ON [pvs_a].[SuperhogUserId] = [vr].[CreatedByUserId] AND [pvs_a].[IsCustom] = 0 AND [pvs_a].[IsActive] = 1
+ -- Default
+ LEFT JOIN [dbo].[PaymentValidationSet] [pvs_d] ON [pvs_d].[SuperhogUserId] IS NULL AND [pvs_d].[IsCustom] = 0 AND [pvs_d].[IsActive] = 1
+ ) [pvs]
+ GROUP BY [pvs].[VerificationRequestId], [pvs].[PaymentValidationSetId]
+ ) [pvs]
+ LEFT JOIN [dbo].[VerificationRequest] [vr] ON [vr].[Id] = [pvs].[VerificationRequestId]
+ LEFT JOIN [dbo].[user] [u] ON [u].[Id] = [vr].[SuperhogUserId]
+ LEFT JOIN [dbo].[Country] [co] ON [co].[Id] = [u].[BillingCountryId]
+ LEFT JOIN [dbo].[Currency] [cu] ON [cu].[Id] = [co].[PreferredCurrencyId]
+ LEFT JOIN [dbo].[PaymentValidationSetToCurrency] [pvstc] ON [pvstc].[PaymentValidationSetId] = [pvs].[PaymentValidationSetId] AND [pvstc].[CurrencyIso] = [cu].[IsoCode]
+
+ --WHERE [VerificationRequestId] = 913616
+ WHERE [vr].[GuestJourneyCompletedDate] IS NULL
+ and [vr].[PaymentValidationSetId] IS NULL
+ and [vr].[ExpiryDate] >= GETDATE()
+
+ --and [VerificationRequestId] = 913616
+
+ ROLLBACK TRAN
+ --COMMIT TRAN
+ ```
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241112 Retro 13c0446ff9c980b0a942d10a7c68583c.md b/notion_data_team_no_files/20241112 Retro 13c0446ff9c980b0a942d10a7c68583c.md
new file mode 100644
index 0000000..3e8bb8d
--- /dev/null
+++ b/notion_data_team_no_files/20241112 Retro 13c0446ff9c980b0a942d10a7c68583c.md
@@ -0,0 +1,60 @@
+# 20241112 Retro
+
+## 🙌 What went well
+
+- **Incident Mgmt**
+ - Outlier tests in Main KPIs work surprisingly well
+ - Problem detection and resolutions
+ - We keep on spearheading incident management and doing things right
+ - Incidents are finally hurting enough for TMT to pay (some) attention
+- **Deliveries**
+ - KPIs refactor, including daily modelisation +1
+ - Integration of Hubspot into DWH
+ - Account Managers report (prev. Top Losers) being extremely useful and used by RevOps teams
+ - Churn rate metrics computation
+ - Starting Guest KPIs in new KPI modelisation
+ - GUEST TAXES CROSSCHECK FINISHED (AT LAST)
+- Last Comilona was AMAZING
+- Domain Analysts advancing well +1
+- GJ A/B test alignment sessions
+- Guest squad is doing the Lord’s work
+- Not impacted by layoffs
+
+## 🌱 What needs improvement
+
+- **Incidents**
+ - Persistent bugs in BookingToProductBundle
+ - Old invoicing incident
+ - + generally a lot of incidents all over the place
+- **People**
+ - Layoffs communication sourness
+ - Lou D. leaving us
+- **Priorities and planning**
+ - Tons of unplanned work - delaying other deliverables for Q4 +2
+ - Q1 company priorities still mostly focus on deliver new stuff rather than fixing core business
+ - + general misalignment between TMT and boots on the ground
+- Data Engineer vacancy not filled by now clearly impacting Q1 +2
+- Some Data Requests do not reach the channel, needs investigation
+- General doomloop sourness around New Dash, with no light at end of tunnel
+
+## 💡 Ideas for what to do differently
+
+- New Dash retrospective with PMs/Dash Squad/Data by the EOY
+- CI/CD checks on DWH complete PR button to ensure branch is up-to-date with master branch
+- Modify data captain distribution
+- Include Tech Team in data alerts channel and tag them
+- Propose and discuss how to align with Tech team to avoid context switching and optimise time and effort
+
+## ✔ Action items
+
+- [ ] If Data Engineer vacancy doesn’t progress by end of September, pursue sign-off on consequences
+ - [ ] Reassess DE plans
+- [ ] Discuss and agree with Tech team on data-alerts onboarding (should they be there? who should we tag?)
+- [ ] Think about how to make some kind of “PBI Homepage” where Superhog personnel can find all the PBIs that are available easily
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] Document existing invoicing processes, not just new ones
+- [ ] Azure DevOps checks on DWH complete PR button to ensure branch is up-to-date with master branch
+- [ ] Discuss with Ben C. New Dash retrospective with PMs/Dash Squad/Data by the EOY
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241119-01 - CheckIn Cover multi-price problem (a 1430446ff9c98088b547dfb0baff6024.md b/notion_data_team_no_files/20241119-01 - CheckIn Cover multi-price problem (a 1430446ff9c98088b547dfb0baff6024.md
new file mode 100644
index 0000000..ed27d28
--- /dev/null
+++ b/notion_data_team_no_files/20241119-01 - CheckIn Cover multi-price problem (a 1430446ff9c98088b547dfb0baff6024.md
@@ -0,0 +1,77 @@
+# 20241119-01 - CheckIn Cover multi-price problem (again)
+
+# CheckIn Cover multi-price problem (again)
+
+Managed by: Pablo
+
+## Summary
+
+- Components involved: SQL Server, DWH, superhog-mono-app codebase
+- Started at: 2024-11-18 12:49:06 UTC
+- Detected at: 2024-11-19 06:34:16 UTC
+- Mitigated at: 2024-11-19 17:30:00 UTC
+
+A new stored procedure released on 2024-11-18 mistakenly added records to `live.dbo.PaymentValidationSetToCurrency` with `0` values for CIH prices and covers. This caused a dimensionality issue in the DWH, which led to duplicate records and bogus CIH reporting in PBI, with inflated sales numbers and other affected data points. Besides that, a seeding script from the application that doesn’t respect the `UpdatedDate` column of `live.dbo.PaymentValidationSetToCurrency` caused data drift between SQL Server and the DWH, which increased investigation complexity and made backfills necessary.
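+
+To make the dimensionality issue concrete, here is a minimal sketch of how the extra price rows fan out downstream joins; the `Booking` table and the `Price` column are hypothetical names used only for illustration:
+
+```sql
+-- Hypothetical illustration: when a currency has more than one row in
+-- PaymentValidationSetToCurrency, any join keyed only on the currency
+-- fans out and duplicates booking-level records downstream.
+SELECT [b].[BookingId], [pvstc].[Price]
+FROM [dbo].[Booking] [b]
+LEFT JOIN [dbo].[PaymentValidationSetToCurrency] [pvstc]
+    ON [pvstc].[CurrencyIso] = [b].[CurrencyIso];
+-- Each BookingId now appears once per price row for its currency,
+-- breaking the primary key of the downstream CIH models.
+```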
+
+This incident is a very close reoccurrence of this one from June: [20240619-01 - CheckIn Cover multi-price problem](20240619-01%20-%20CheckIn%20Cover%20multi-price%20problem%20fabd174c34324292963ea52bb921203f.md). The underlying design mistakes that act as a root cause are common across both incidents.
+
+## Impact
+
+CIH reporting in the DWH has been displaying incorrect figures for 11 hours. This includes data such as revenue totals, sales counts, funnel and conversion rates metrics, and individual sales records displaying wrong prices.
+
+## Timeline
+
+All times are UTC.
+
+| Time | Event |
+| --- | --- |
+| Sometime before 2024-11-18 12:49:06 | A release was made on the Superhog backend, which added the migration `202411121235595_CreateCustomBundle.cs` |
+| 2024-11-18 12:49:06 | Faulty records with `0` value for CIH price and cover got added to `live.dbo.PaymentValidationSetToCurrency`. We suspect they were added by the stored procedure `CreateCustomBundle`. |
+| 2024-11-18 13:00:10 | One of the hourly Airbyte jobs that syncs between SQL Server and the DWH caught the faulty records and copied them over into the DWH. |
+| At some unknown time between 2024-11-18 13:00:10 and 2024-11-19 06:15:00 | The seeding script for CIH prices and covers runs in SQL Server, overriding the faulty `0`-valued records with the proper prices. |
+| 2024-11-19 06:15:00 | A `dbt run` was triggered, propagating the faulty records into downstream models and breaking the granularity of some models with duplicate records. From this point on, data in the DWH and the PBI reports reading from it was wrong. |
+| 2024-11-19 06:34:16 | A data test was triggered due to duplicate records in `reporting.core__vr_checkin_cover` breaking the PK. Data team starts investigating. |
+| 2024-11-19 14:00:00 | Pablo realises the issue looks like a duplicate of [20240619-01 - CheckIn Cover multi-price problem](20240619-01%20-%20CheckIn%20Cover%20multi-price%20problem%20fabd174c34324292963ea52bb921203f.md). This drives him to quickly spot and confirm the data drift and the faulty records. |
+| 2024-11-19 15:30:00 | Pablo discusses with Lawrence and the root cause of the issue is identified. |
+| 2024-11-19 17:30:00 | An Airbyte + dbt backfill to fix the data drift and remove the faulty records finishes. From this point on, data in the DWH and PBI is correct again. |
+| | Incident mitigated. |
+
+## Root Cause(s)
+
+The root cause is a combination of the following:
+
+- The true, core root cause is that business logic for CIH across the company assumes that CIH has a single, global price across all Superhog for each currency. Despite this, the database actually allows for different prices per platform user. This design is not fit for our business logic and allows for incidents like this to happen. Should this be redesigned to properly reflect our business logic, neither this incident nor [20240619-01 - CheckIn Cover multi-price problem](20240619-01%20-%20CheckIn%20Cover%20multi-price%20problem%20fabd174c34324292963ea52bb921203f.md) would have happened.
+- In the case of this incident, the trigger of the issue was that the uniqueness of price values per currency in `live.dbo.PaymentValidationSetToCurrency` was not respected by the stored procedure `CreateCustomBundle` (added by the migration `202411121235595_CreateCustomBundle.cs`), which set the CIH price and cover values of some accounts to `0`.
+- This cascaded into breaking the uniqueness of the primary key of table `dwh.intermediate.int_core__check_in_cover_prices` in the DWH, which led to duplicate records in downstream tables related to CIH, and to wrong data being displayed in PBI reports.
+- Besides that, a seeding script that updates CIH price and cover values ran on top of `live.dbo.PaymentValidationSetToCurrency`, overriding prices without respecting the `UpdatedDate` column. This caused data drift across the DWH and SQL Server.
+
+## Resolution and recovery
+
+The short-term mitigation consisted of:
+
+- Reverting the wrong, `0`-valued records in `live.dbo.PaymentValidationSetToCurrency` back to their proper prices (this happened accidentally, via the seeding script).
+- Performing a backfill of the table `PaymentValidationSetToCurrency` on Airbyte so that the `sync` layer table would stop having duplicated prices.
+- Executing a `dbt run` on the DWH to propagate the fixed data.
+
+## **Lessons Learned**
+
+- What went well
+ - Automated data alerts in DWH helped us notice the incident fast.
+    - The post-mortem from the previous incident accelerated investigation and resolution a lot. It made it easy to understand what was happening and fix it, even though the incident is rather tricky, as it has many moving parts.
+- What went badly
+    - Our inadequate design for the CIH logic in the backend keeps coming back to bite us.
+    - The complexity and shared boundaries across squads are causing us to step on each other’s toes (a change made by the New Dash squad altered behaviour in the Guest squad’s domain in an uncontrolled way).
+    - We didn’t act on what we learned from the previous incident of this type back in June, so the issues keep reappearing.
+- Where did we get lucky
+    - The CIH prices seeding script happened to override the wrong values inserted by the new migration added by the Dash Squad, so the faulty source records were removed by sheer luck.
+
+## Action Items
+
+- [ ] Fix the stored procedure `CreateCustomBundle` defined in the migration `202411121235595_CreateCustomBundle.cs` so that it stops creating `PaymentValidationSetToCurrency` records with prices different from the canonical ones.
+ - The exact lines that cause the issue [can be found here](https://guardhog.visualstudio.com/Superhog/_git/superhog-mono-app?path=/Guardhog.Data/StoredProcedures/CreateCustomBundle/202411121235595_CreateCustomBundle.cs&version=GBdevelop&line=170&lineEnd=171&lineStartColumn=4&lineEndColumn=29&lineStyle=plain&_a=contents)
+- [ ] Modify the CIH prices seeding script so that it respects the `UpdatedDate` column, preventing future data drifts.
+- [x] Add more specific data tests in the DWH to spot this issue faster (a test that doesn’t exist yet and would instantly give away that this issue is happening; see the sketch below)
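+
+As a rough idea of the kind of test meant above, here is a minimal sketch of a singular data test; `CheckInHeroPrice` is a hypothetical column name used only for illustration:
+
+```sql
+-- Hypothetical data test: return every currency that carries more than one
+-- distinct CIH price. Any returned row means the "single global price per
+-- currency" assumption has been violated and downstream CIH models are
+-- about to produce duplicate records.
+SELECT
+    [CurrencyIso],
+    COUNT(DISTINCT [CheckInHeroPrice]) AS [DistinctPrices]
+FROM [dbo].[PaymentValidationSetToCurrency]
+GROUP BY [CurrencyIso]
+HAVING COUNT(DISTINCT [CheckInHeroPrice]) > 1;
+```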
+
+## Appendix
+
+Link to the previous occurrence of this issue: [20240619-01 - CheckIn Cover multi-price problem](20240619-01%20-%20CheckIn%20Cover%20multi-price%20problem%20fabd174c34324292963ea52bb921203f.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241210 Retro 1580446ff9c9803ea397d22f31bade85.md b/notion_data_team_no_files/20241210 Retro 1580446ff9c9803ea397d22f31bade85.md
new file mode 100644
index 0000000..17f88ba
--- /dev/null
+++ b/notion_data_team_no_files/20241210 Retro 1580446ff9c9803ea397d22f31bade85.md
@@ -0,0 +1,67 @@
+# 20241210 Retro
+
+## 🙌 What went well
+
+- **Delivery**
+ - Good advance with KPIs reporting
+ - A/A test showed there were some improvements needed in the tracking and Guest Squad quickly fixed it
+ - Check-in hero & S&P integrations to DWH went smoothly
+ - Migration of Athena CosmosDB container went smoothly
+ - You guys (J&U) have been rocking it
+- **Methodology**
+ - Quiet days are being helpful. More?
+ - Documentation keeps helping us a lot
+ - dbt docs working perfectly
+ - Anaxi and Infra documentation is absolutely perfect for dumb analysts like Uri to handle actual DE work effectively
+ - Tagging unplanned work in a dedicated epic to wrap up EOQ
+ - Very good collaboration with APIs and Guest Squads
+- **Fun**
+ - Cube and rollup workshop in love
+ - EOY social activities (Quiz, Escape Room, Dinner)
+ - Glad to see some upcoming python usage
+ - And notebooks are very cool (used properly)
+- Domain analysts program +1
+
+## 🌱 What needs improvement
+
+- **Methodology**
+ - Huge amounts of unplanned work - not being able to fully reach Q4 objectives
+ - Very reactive quarter: spinning from one fire to the next
+- **Keeping things tidy**
+ - Going crazy with PBI permissions: invisible tangle of who has access where
+ - Having a better way to follow up on the usage of reports to see which ones are relevant
+ - We are not doing great with cleaning up old stuff, tends to stick around permanently
+- Tech change management from MS Server changes side
+- New Dash reporting is still facing data quality issues from the source and it’s not prioritised
+- **Cachondeito (fun & banter)**
+ - Moar comilonas.
+ - Kind of miss the office (an office and the bars, not Norssken specially)
+
+## 💡 Ideas for what to do differently
+
+- Change planning and organization to be a tad less reactive, have more time to tidy things up? Shape up? Change Data Captain role?
+- Kill SH legacy reporting to avoid confusion on KPIs
+- More syncs with RevOps (we do tons with Product, a bit with Finance, RevOps is the long forgotten son)
+- Put some order in our Notion
+- Make retros 2h
+
+## ✔ Action items
+
+- [x] If Data Engineer vacancy doesn’t progress by end of September, pursue sign-off on consequences
+ - [x] Reassess DE plans
+- [ ] Retro with Ben C. around planning practices (centralization is not working, changing scopes too fast, etc).
+ - [ ] Quiet Tuesdays and Quiet Thursdays
+ - [ ] Move calendar recurring meetings
+ - [ ] Give Ben C. a heads-up
+- [ ] Read Shape-up ([https://basecamp.com/shapeup/](https://basecamp.com/shapeup/)) and discuss next retro
+- [ ] Fuse Comilona and Retro and schedule for Monday 13/01 and make retros loooonger (2H)
+- [ ] Sketch roughly formalization of Domain Analysts programme
+- [ ] Discuss and agree with Tech team on data-alerts onboarding (should they be there? who should we tag?)
+- [ ] Think about how to make some kind of “PBI Homepage” where Superhog personnel can find all the PBIs that are available easily
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] Document existing invoicing processes, not just new ones
+- [ ] Azure DevOps checks on DWH complete PR button to ensure branch is up-to-date with master branch
+- [ ] Discuss with Ben C. New Dash retrospective with PMs/Dash Squad/Data by the EOY
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241211-01 - DWH scheduled execution has not been 1590446ff9c9806086e0ec77336d4c51.md b/notion_data_team_no_files/20241211-01 - DWH scheduled execution has not been 1590446ff9c9806086e0ec77336d4c51.md
new file mode 100644
index 0000000..54345bd
--- /dev/null
+++ b/notion_data_team_no_files/20241211-01 - DWH scheduled execution has not been 1590446ff9c9806086e0ec77336d4c51.md
@@ -0,0 +1,79 @@
+# 20241211-01 - DWH scheduled execution has not been launched
+
+# DWH scheduled execution has not been launched
+
+Managed by: Uri and Pablo
+
+## Summary
+
+- Components involved: Airbyte VM, Airbyte, dbt, xexe, anaxi, DWH
+- Started at: 2024-12-11 05:00:00 UTC
+- Detected at: 2024-12-11 07:41:00 UTC
+- Mitigated at: 2024-12-11 09:48:00 UTC
+
+An out-of-the-ordinary resource consumption by Airbyte left the Airbyte VM knocked out for 5 hours due to lack of memory. Jobs run on that machine by various of our data platform components didn’t execute. We rebooted the machine and re-ran all pending work.
+
+## Impact
+
+The nightly loading and refreshing of data in the DWH has been delayed by about 5 hours. This means reporting was stale for business users for around 3 hours during working time (assuming nobody looks at PBI reports at 6AM. Maybe Joan?).
+
+## Timeline
+
+All times are UTC.
+
+| Time | Event |
+| --- | --- |
+| 2024-12-11 04:00:00 | The Airbyte VM jumps from having 1.5GB of free memory to almost none (~50 MB). CPU usage also picks up from ~0% to ~50% and stays stuck there. <br><br>A sync job for the stream SQL Server incremental to DWH starts (job ID: 19235), but communication with the worker container is lost at 04:01:16. <br><br>A sync job for the stream SQL Server full refresh to DWH starts (job ID: 19237), but communication with the worker container is lost at 04:01:16. <br><br>A sync job for the stream Stripe UK to DWH starts (job ID: 19236), but communication with the worker container is lost at 04:01:06. |
+| 2024-12-11 04:01:17 | Airbyte jobs scheduled to begin from this point in time onwards do not start due to lack of resources. <br><br>All cron jobs on the machine after this point in time do not start due to lack of resources. This includes dbt, anaxi and xexe jobs. |
+| 2024-12-11 07:41:00 | Uri notices Main KPIs are not updated with 10th December data. After checking Data Alerts, no alert has been raised. After checking Data Receipts, Uri confirms that the expected scheduled run has not been executed. |
+| 2024-12-11 07:43:00 | A message in the Data channel is sent to notify users of an ongoing incident. |
+| 2024-12-11 07:50:00 | Uri tries to connect to the SH Data Airbyte machine unsuccessfully. |
+| 2024-12-11 07:56:00 | Uri notices that something happened around 4AM UTC, since Airbyte resource consumption has fallen to a minimum and stagnated. Compared with the behaviour on previous days, this looks out of the ordinary. |
+| 2024-12-11 08:07:00 | It looks like a networking issue, but we cannot be 100% sure. Uri suggests restarting the Airbyte machine, but it might not be the best approach. Waiting for Pablo since he’s the expert. |
+| 2024-12-11 08:30:00 | Pablo comes in and looks at the situation. He identifies the lack of available RAM memory in the Airbyte VM and assumes that Airbyte has consumed all available resources and locked the VM in doing so. |
+| 2024-12-11 08:47:00 | Pablo triggers a reboot of the Airbyte VM, which completes successfully in a couple of minutes. Memory gets freed as part of it and the VM and container services become reactive once again. |
+| 2024-12-11 08:49:00 | Multiple Airbyte jobs start again to catch up with the missed runs. |
+| 2024-12-11 09:36:00 | Pablo starts triggering missed xexe, anaxi and dbt jobs. |
+| 2024-12-11 10:25:00 | All due jobs are completed and the DWH state is up to date. |
+| | End of mitigation |
+
+## Root Cause(s)
+
+Multiple Airbyte jobs got triggered to run at 04:00:00 UTC. It seems the workload produced by that night’s data volume was enough to consume all RAM in the VM and bring it to a deadlocked state. This chained into all jobs running on the VM (Airbyte, dbt, anaxi and xexe) not working until mitigation was put in place.
+
+## Resolution and recovery
+
+We brought things back to normal by rebooting the Airbyte VM so that the machine would stop being deadlocked.
+
+Some pending jobs started themselves. Others were triggered manually.
+
+## **Lessons Learned**
+
+- What went well
+ - Azure dashboards allowed us to identify the resource bottleneck easily.
+ - Team is alert and notices fishy behaviours fast, even when there are no alerts.
+ - Our logs allowed to understand nicely what ran and what didn’t.
+- What went badly
+    - We almost forgot to re-run the xexe and anaxi jobs.
+    - We had no alerts. We aren’t testing for freshness the right way, so the DWH can go stale without warning.
+- Where did we get lucky
+    - We got lucky that this hadn’t happened before.
+
+## Action Items
+
+- [x] Change current schedules in Airbyte to avoid the 04:00 AM memory usage peak.
+    - Stripe UK and Superhog full refresh have been shifted by a few minutes (25 and 35 minutes later than the previous schedule).
+- [ ] Discuss the implementation of dbt source freshness tests (see the sketch after this list).
+- [ ] Research ways to prevent Airbyte from sucking up all available memory, or at least notify when it happens.
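+
+As a starting point for that discussion, below is a minimal sketch of the kind of freshness check we could run as a data test; the `sync` schema name and the 24-hour threshold are assumptions used only for illustration:
+
+```sql
+-- Hypothetical freshness test: return a row (i.e. fail) when the most
+-- recently loaded record in a synced table is older than the expected
+-- nightly sync window, so a silently missed run raises an alert.
+SELECT MAX([UpdatedDate]) AS [LastUpdate]
+FROM [sync].[PaymentValidationSetToCurrency]
+HAVING MAX([UpdatedDate]) < DATEADD(HOUR, -24, GETDATE());
+```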
+
+## Appendix
+
+-
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241217 - Long-term Data topics with Rich 15f0446ff9c980bfb932ee563ba1b25e.md b/notion_data_team_no_files/20241217 - Long-term Data topics with Rich 15f0446ff9c980bfb932ee563ba1b25e.md
new file mode 100644
index 0000000..42b1533
--- /dev/null
+++ b/notion_data_team_no_files/20241217 - Long-term Data topics with Rich 15f0446ff9c980bfb932ee563ba1b25e.md
@@ -0,0 +1,49 @@
+# 20241217 - Long-term Data topics with Rich
+
+We sit to discuss on 2024-12-17.
+
+# Recap on what the Data team does
+
+What we should do:
+
+- Supervise data across the org: catalogue all the data and data products we have
+- Build and own company wide Data infrastructure
+- Build and own stable, company wide reports, dashboards, etc.
+- Provide brain power for complex analysis
+- Build Data Literacy across the company
+
+What we shouldn’t do:
+
+- Own absolutely every little report that exists in Superhog (and become a bottleneck in doing so)
+- Act as a poorly designed patch/release-valve on other product shortcomings
+- Miraculously overcome lack of data and poor-quality data
+
+# Recap on our capabilities
+
+[Check this whiteboard](https://guardhog-my.sharepoint.com/:wb:/g/personal/pablo_martin_superhog_com/ERy6GpPt0S9Ht_vhl6jF8EsBWE9fzDX8DIv7vQv3whNo7A?e=pHw8C9).
+
+# Long-term topics
+
+Topics:
+
+- Grow team to sufficient size to increase bus factor
+- Implant analyst roles within business functions
+- Long-term capabilities that we lack:
+ - Enable application runtimes to access DWH data
+ - Embedding data products within customer-facing applications
+ - Advanced orchestration of workloads
+ - Automated, one off, file based reports to internal and external users
+ - Full end-to-end lineage from data sources to data products + tracking of data products usage
+- Improve relationship with stakeholders:
+ - Mature data contracts approach with upstream teams
+ - Mature tracking and communication with end-consumers
+ - Improve priorities setting with the business
+- Keep on improving analyst experience to maximize productivity/avoid going into maintenance hell
+- Transfer out accumulated responsibilities that make no sense (invoicing and other shadow-product-engineering areas)
+
+Stuff to do to achieve the above:
+
+- Hire a DE
+- Migrate from PBI into a better tool
+- Carefully add new tooling to the Data Platform
+- Build SOP with other teams for upstream/downstream relationships
\ No newline at end of file
diff --git a/notion_data_team_no_files/20241218 - Ways of working with Matt 1600446ff9c9801fa112d0ff4a431667.md b/notion_data_team_no_files/20241218 - Ways of working with Matt 1600446ff9c9801fa112d0ff4a431667.md
new file mode 100644
index 0000000..94ae0c5
--- /dev/null
+++ b/notion_data_team_no_files/20241218 - Ways of working with Matt 1600446ff9c9801fa112d0ff4a431667.md
@@ -0,0 +1,32 @@
+# 20241218 - Ways of working with Matt
+
+We (Matt, Uri, Pablo) sit down to discuss how to transition after Ben C.’s departure.
+
+Topics:
+
+- Our challenges
+ - Prioritising across areas, keeping up with initiatives from other teams
+ - Balancing planning and doing
+ - Balancing maintenance, adhoc and long-term work
+ - Making sure maintenance is visible and timely
+- Team splits
+ - Engineering vs Analysts
+ - Leads vs ICs
+- Planning:
+ - Quarterly TMT planning
+ - Quarterly tech meeting
+ - Biweekly planning
+ - Daily
+ - Retrospectives
+- Quarterly retrospectives with you?
+- People mgmt topics
+ - Performance reviews and career planning
+ - Holiday planning
+
+---
+
+Stuff we discussed and agreed:
+
+- Keep biweekly planning, invite stakeholders as needed when they need to come in.
+- Tip the balance of long-term vs adhoc more towards adhoc
+- Set weekly meetings with Matt (alternate planning with simple people catchups)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024Q3 Data Tech Meeting 9f3da234200443028fb178c882ceaf7d.md b/notion_data_team_no_files/2024Q3 Data Tech Meeting 9f3da234200443028fb178c882ceaf7d.md
new file mode 100644
index 0000000..d0d456d
--- /dev/null
+++ b/notion_data_team_no_files/2024Q3 Data Tech Meeting 9f3da234200443028fb178c882ceaf7d.md
@@ -0,0 +1,22 @@
+# 2024Q3 Data <> Tech Meeting
+
+**Agenda:**
+
+- How are you guys doing?
+- Heads-up: we want a new Data Engineer
+- Documentation
+ - Knowledge on data models and business context around them is key for Data execution
+ - We are currently struggling with this and we feel you as well
+    - The time sink will only get worse as teams grow (N-N comms)
+ - How can we help improve this? (PS: we are already educating and insisting business and PMs on the importance of this)
+- Integrations and Dependencies (SQL Server and Cosmos DB)
+ - It’s been a good Q, thanks for that
+ - Looking forward to keeping it that way
+ - Current setup is very informal and lean… but it works, so let’s keep it simple as long as we can
+- A/B testing
+ - We want to give it a first shot during Q4 in the Guest Journey
+ - Very valuable long term but the capacity will need to be built over time
+ - Work with squads will have to be very tight
+ - Guest journey is the focus for now, but New Dash could be part of it eventually
+- Data Quality capabilities
+ - We are progressing, some capabilities might be useful for you
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024Q3 ff7f97af85744bb4bf9a1c1f679ac50a.md b/notion_data_team_no_files/2024Q3 ff7f97af85744bb4bf9a1c1f679ac50a.md
new file mode 100644
index 0000000..fb9a96e
--- /dev/null
+++ b/notion_data_team_no_files/2024Q3 ff7f97af85744bb4bf9a1c1f679ac50a.md
@@ -0,0 +1,7 @@
+# 2024Q3
+
+[Q3 Data Achievements ](Q3%20Data%20Achievements%201130446ff9c9800e84e4f03750b752a1.md)
+
+[Q3 OKRs drafting](Q3%20OKRs%20drafting%2033c62b60320849acbb01925a01f7a383.md)
+
+[2024Q3 Data <> Tech Meeting](2024Q3%20Data%20Tech%20Meeting%209f3da234200443028fb178c882ceaf7d.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024Q4 6420ae68694f4f86ab69bdce3b2dfa24.md b/notion_data_team_no_files/2024Q4 6420ae68694f4f86ab69bdce3b2dfa24.md
new file mode 100644
index 0000000..8f5a2f8
--- /dev/null
+++ b/notion_data_team_no_files/2024Q4 6420ae68694f4f86ab69bdce3b2dfa24.md
@@ -0,0 +1,7 @@
+# 2024Q4
+
+[Q4 Data Scopes proposal](Q4%20Data%20Scopes%20proposal%2075bf38ab8092471d910840ab86b0ec60.md)
+
+[Q4 Data Achievements](Q4%20Data%20Achievements%201570446ff9c980b0a094ccfc9533bee4.md)
+
+[2024Q4 Data <> Tech Meeting](2024Q4%20Data%20Tech%20Meeting%2017a0446ff9c9802da22be93fea285cc4.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2024Q4 Data Tech Meeting 17a0446ff9c9802da22be93fea285cc4.md b/notion_data_team_no_files/2024Q4 Data Tech Meeting 17a0446ff9c9802da22be93fea285cc4.md
new file mode 100644
index 0000000..7a3723b
--- /dev/null
+++ b/notion_data_team_no_files/2024Q4 Data Tech Meeting 17a0446ff9c9802da22be93fea285cc4.md
@@ -0,0 +1,15 @@
+# 2024Q4 Data <> Tech Meeting
+
+**Agenda:**
+
+- How are you guys doing?
+- Heads-up: Pablo’s pat. leave
+ - Team is going to be limited, engineering wise
+ - Uri might need support at times
+- Data contracts & Dependency management
+ - We are becoming blockers more and more often
+    - We feel we need to explore ways to improve this before we hit a deadlock
+ - Should we improve our comms around data alerts? Should we share ownership more?
+- A/B testing retro
+- FX Rates are now shared and available for you
+- [Evidence.dev](http://Evidence.dev), is it of your interest?
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-01-22 - Data Planning 1830446ff9c980878e75c412ed07f0a4.md b/notion_data_team_no_files/2025-01-22 - Data Planning 1830446ff9c980878e75c412ed07f0a4.md
new file mode 100644
index 0000000..33030e4
--- /dev/null
+++ b/notion_data_team_no_files/2025-01-22 - Data Planning 1830446ff9c980878e75c412ed07f0a4.md
@@ -0,0 +1,32 @@
+# 2025-01-22 - Data Planning
+
+### Done
+
+- RevOps - Active PMS in New Dashboard Reporting
+- KPIs - Invoiced Revenue refactor (+ data is now cut to April 2022)
+- KPIs - New Dash Invoiced Revenue now available
+- KPIs - New Onboarding MRR metric is available in Main KPIs
+- Finance - Guesty API fees changed on November 2024, report updated
+- Product - Finalised Guest Journey A/B test analysis with very good results
+- Rebranding - Hubspot integration to DWH remained unaffected
+
+### In Progress
+
+- Bugfix - Guest KPIs reporting has a strange connectivity issue
+- Finance - Check in Hero API reporting for invoicing purposes
+- Finance - Screen and Protect API reporting for invoicing purposes (currently stopped since there’s no clients)
+- Finance - Accounting aggregations reporting → This goes in line with improving Revenue accuracy in KPIs
+- Other - Excel tips and best practices documentation (low prio)
+- Other - Discontinue Superhog Reporting (legacy Power BI)
+
+### To Do (does not include critical subjects discussed last week)
+
+- Other - Understand Booking Fees / Cancelled Bookings decay in Dec 2024
+- KPIs/RevOps (Chloe) - Track Revenue Retained ratios in Main KPIs for graphical display over time
+- KPIs - Propose Billable Booking KPI definition for New Dash: 1 Booking can have multiple services invoiced in different times, how do we attribute them?
+- Product/RevOps - Include a user adoption funnel per service for New Dash, to identify adoption/upsell possibilities
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price
+- RevOps (Kayla) - Churn prevention: alerting system for when a user had a PMS and no longer has it or when recently created bookings per account are decreasing, plus displaying the last time an account was contacted. This could potentially be the first step towards “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
+- KPIs - Rework Revenue display in Main KPIs
+- KPIs - OKR, target-based reporting
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-01-29 - Data Planning 1890446ff9c9803281b2eba928ce1a86.md b/notion_data_team_no_files/2025-01-29 - Data Planning 1890446ff9c9803281b2eba928ce1a86.md
new file mode 100644
index 0000000..52a983b
--- /dev/null
+++ b/notion_data_team_no_files/2025-01-29 - Data Planning 1890446ff9c9803281b2eba928ce1a86.md
@@ -0,0 +1,43 @@
+# 2025-01-29 - Data Planning
+
+### General Updates
+
+- Start weekly syncs with Guy / Finance on KPIs - Suzannah
+- Engineering - Ingestion of backend data to billing db
+    - Data POV: It’s supposed to be carried out on the Engineering side.
+    - Engineering POV: It’s supposed to be carried out on the Data side.
+    - Pending discussion between Uri and Ben on Wed 29th to clarify
+    - Uri’s POV: This is an Engineering architectural decision and implementation. This is a big no-no on our side, especially if Pablo is not here. It would take me several days to build a not-very-robust implementation for a project as critical as invoicing - and we have already had some issues in this regard. I might need your support on this.
+
+### Done
+
+- January invoicing incident resolution - [Incident Report](20250124-01%20-%20Booking%20invoicing%20incident%201880446ff9c9803fb830f8de24d97ebb.md)
+- Bugfix - Guest KPIs reporting has a strange connectivity issue - [Incident Report](20250122-01%20-%20Power%20BI%20Main%20Guest%20KPIs%20Bug%201840446ff9c980249355f34c58c4686e.md)
+- Finance - Check in Hero API reporting for invoicing purposes - [Link](https://app.powerbi.com/groups/me/apps/043c0aec-20b8-4318-9751-f7164b3634ad/reports/ca328a93-8d9d-431c-ac01-c646c81ba421/285e358d70a0c9155b23?experience=power-bi)
+- KPIs/RevOps (Chloe) - Track Revenue Retained ratios in Main KPIs for graphical display over time
+- Guesty Invoicing - Data quality issues misunderstanding for invoicing on Finance/APIs side
+- Other - Fix Data Request workflow
+- Guest Squad - A/B test mess fixed + retrospect upon - [Post mortem here](https://www.notion.so/Confusion-over-Fixed-vs-Relative-on-A-B-test-results-Incident-report-1850446ff9c9804f9fd7e004ed47d095?pvs=21)
+- Data - Fixed alerts that failed on Jan 29th
+
+### In Progress
+
+- Guesty - Resolutions payouts analysis
+- KPIs - Rework Onboarding MRR (avg per client + actual revenue expected)
+- Finance - Accounting aggregations reporting → This goes in line with improving Revenue accuracy in KPIs
+- Other - Understand Booking Fees / Cancelled Bookings decay in Dec 2024
+- KPIs - Propose Billable Booking KPI definition for New Dash: 1 Booking can have multiple services invoiced in different times, how do we attribute them?
+- Finance - Screen and Protect API reporting for invoicing purposes (currently stopped since there’s no clients)
+- Other - Discontinue Superhog Reporting (legacy Power BI)
+- Other - Excel tips and best practices documentation (low prio)
+
+### To Do (does not include critical subjects)
+
+- KPIs - Implement Billable Booking KPI for new dash after definition
+- KPIs - Rework Revenue display in Main KPIs
+- KPIs - OKR, target-based reporting
+- KPIs - New Dash vs. Old Dash category
+- Product/RevOps - Include a user adoption funnel per service for New Dash, to identify adoption/upsell possibilities
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price
+- RevOps (Kayla) - Churn prevention: alerting system for when a user had a PMS and no longer has it or when recently created bookings per account are decreasing, plus displaying the last time an account was contacted. This could potentially be the first step towards “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-02-05 - Data Planning 1910446ff9c9803b8885da35ba2d9b71.md b/notion_data_team_no_files/2025-02-05 - Data Planning 1910446ff9c9803b8885da35ba2d9b71.md
new file mode 100644
index 0000000..d10c109
--- /dev/null
+++ b/notion_data_team_no_files/2025-02-05 - Data Planning 1910446ff9c9803b8885da35ba2d9b71.md
@@ -0,0 +1,53 @@
+# 2025-02-05 - Data Planning
+
+### General Updates
+
+- Started weekly syncs with Guy / Finance on KPIs - Suzannah
+- Engineering - Ingestion of backend data to billing db
+    - The proper solution has been delayed until Pablo is back to decide. In the meantime, one-shot inputs carried out by Tech
+- Other - Discontinue Superhog Reporting (legacy Power BI)
+ - Apparently it was more widely used than expected
+ - The effort has shifted towards re-implementing the necessary bits (Listings, Bookings, Payments) but reading from DWH so Data has full control
+- On Waiver Payouts and Resolutions Payouts - Revenue Retained Post-Resolutions trends
+
+
+
+- I wonder if we should invest / start a working line on:
+ - Resolutions claims data (still pending integration, no news)
+ - Understanding client price plans / programs for upsell or detect edge cases
+
+### Done
+
+- Finance - Run Old Dash invoicing exports for January 2025
+- Guesty - Resolutions payouts analysis
+- KPIs - Invoiced data is now available on the 20th of the month for the previous month
+- KPIs - Rework Onboarding MRR (avg per client + actual revenue expected)
+- Finance - Accounting aggregations reporting (including client MoM comparison to spot incidents) - [Report here](https://app.powerbi.com/groups/me/apps/4a019abb-880f-4184-adc9-440ebd950e00/reports/9d97fb1e-505e-4592-8a37-d28526a93f4c/7659e1cc0a39b8c3d5cd?experience=power-bi)
+- Other - Understand Booking Fees / Cancelled Bookings - [First analysis completed](https://www.notion.so/2025-02-04-Booking-Fees-per-Billable-Booking-Decrease-1840446ff9c980588958c56a8b600d47?pvs=21)
+- Other - Help Leo on potential future verification invoicing for historical client Operto
+- Other - Investigated issues raised on New Dash reporting with Gus
+- Other - Several small requests, mostly from Finance. Re-insisted on using Data Requests workflow to avoid constant context switching
+
+### In Progress
+
+- Other - Re-implement Superhog Reporting reading from DWH
+- KPIs - Propose Billable Booking KPI definition for New Dash: 1 Booking can have multiple services invoiced in different times, how do we attribute them?
+- KPIs/New Dash/AM reporting - Exclude known test accounts to increase data quality (won’t remove all of them)
+- Product/RevOps - Include a user adoption funnel per service for New Dash, to identify adoption/upsell possibilities
+
+### Stopped / No advancement
+
+- Finance - Screen and Protect API reporting for invoicing purposes (currently stopped since there’s no clients)
+- Other - Excel tips and best practices documentation (no advancements, low prio)
+
+### To Do (does not include critical subjects)
+
+- KPIs - New Dash vs. Old Dash category
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price + Deal id
+- KPIs - Implement Billable Booking KPI for new dash after definition
+- KPIs - Rework Revenue display in Main KPIs
+- KPIs - OKR, target-based reporting
+- KPIs - Rework Cancellation rates (attribute them to Check-out + ratio)
+- KPIs - Payment Count and Average Amount per Payment Count (Waiver/Deposit Fee/etc)
+- RevOps (Kayla) - Churn prevention: alerting system for when a user had a PMS and no longer has it or when recently created bookings per account are decreasing, plus displaying the last time an account was contacted. This could potentially be the first step towards “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-02-12 - Data Planning 1970446ff9c980039759e389ac07cae9.md b/notion_data_team_no_files/2025-02-12 - Data Planning 1970446ff9c980039759e389ac07cae9.md
new file mode 100644
index 0000000..dd1fce4
--- /dev/null
+++ b/notion_data_team_no_files/2025-02-12 - Data Planning 1970446ff9c980039759e389ac07cae9.md
@@ -0,0 +1,41 @@
+# 2025-02-12 - Data Planning
+
+### General Updates
+
+- Joaquin off on Monday 17th + Uri off on Friday 21st
+
+### Done
+
+- Other - Re-implement [Superhog Reporting](https://app.powerbi.com/groups/me/apps/86bd5a07-0cd9-40ab-9e97-71816e3467e8/reports/fe54c090-ae85-4cfd-9f28-3d31ab486bc3/dfc2fe95ee1672c1bbdc?experience=power-bi) reading from DWH
+- KPIs - Propose Billable Booking KPI definition for New Dash: Agreed with Suzannah on 2 metric definition
+- KPIs - Rework Cancellation rates (attribute them to Check-out + ratio)
+- KPIs - [First draft of Main KPIs - Overview](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/50c56def523c2003b054?experience=power-bi)
+- Ad hoc requests - several completed on Finance/RevOps side
+- Invoicing Incident finally closed after Post Mortem
+
+### In Progress
+
+- KPIs - New Dash vs. Old Dash (vs. API) category
+- Product/RevOps - Include a user adoption funnel per service for New Dash, to identify adoption/upsell possibilities
+- Guests - Start discussing on the implementation for Guest Products and Single Payment - Multi Service refactor. Likely work to start soon.
+
+### Stopped / No advancement
+
+- KPIs/New Dash/AM reporting - Exclude known test accounts to increase data quality (won’t remove all of them) - Engineering to build this properly so we can exclude them properly
+- Finance - Screen and Protect API reporting for invoicing purposes (currently stopped since there’s no clients)
+- Other - Excel tips and best practices documentation (no advancements, low prio)
+
+### To Do (does not include critical subjects)
+
+- KPIs - Implement Billable Booking KPI for new dash after definition
+- KPIs - Rework Main KPIs overview (YTD+MTD)
+ - Might need creation of APIs KPIs (for Bookings mostly)
+- Resolutions - Ingest Resolution Centre Data into DWH
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price + Deal id
+- Resolutions - DWH modelling
+- Resolutions - Reporting
+- Invoicing Incident - Further automation improvements: Xero (for source of truth in actual invoiced amount) + Hubspot (for churn/onboarding/AM info) + Backend (for what we should have invoiced, New Dash mostly)
+- KPIs - Payment Count and Average Amount per Payment Count (Waiver/Deposit Fee/etc)
+- RevOps (Kayla) - Churn prevention: alerting system for when a user had a PMS and no longer has it or when recently created bookings per account are decreasing, plus displaying the last time an account was contacted. This could potentially be the first step towards “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
+- RevOps (Alex) - Client Cohorts: explore retention + key metrics to understand if it’s valuable for further client understanding
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-02-19 - Data Planning 19e0446ff9c98063be3df87905cc8ca4.md b/notion_data_team_no_files/2025-02-19 - Data Planning 19e0446ff9c98063be3df87905cc8ca4.md
new file mode 100644
index 0000000..6de2318
--- /dev/null
+++ b/notion_data_team_no_files/2025-02-19 - Data Planning 19e0446ff9c98063be3df87905cc8ca4.md
@@ -0,0 +1,45 @@
+# 2025-02-19 - Data Planning
+
+### General Updates
+
+- Uri off on Friday 21st (reminder)
+
+### Done
+
+- KPIs - New Dash vs. Old Dash (vs. API) category ([Main KPIs report](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?experience=power-bi))
+- KPIs - Implement Live Deals ([Main KPIs report](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?experience=power-bi))
+- KPIs - Implement Billable Booking KPI for new dash after definition ([Main KPIs report](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?experience=power-bi))
+- Product/RevOps - Include a user adoption funnel per service for New Dash, to identify adoption/upsell possibilities ([New Dash - Offered Services report](https://app.powerbi.com/groups/me/apps/d6a99cb6-fad1-4e92-bce1-254dcff0d9a2/reports/44d8eee3-e1e6-474a-9626-868a5756ba83/99f744c80b91c605a7a1?ctid=862842df-2998-4826-bea9-b726bc01d3a7&experience=power-bi))
+- Ad hoc requests, especially on Home Team Vacations Rentals (Kayla) → Risk of losing Booking fees after decrease from 10 USD to 6 USD
+- Guests - Align for Illustrations A/B test launching next week
+- Resolutions - Ingest Resolution Centre Data into DWH
+- Data internal - Fixed CPU consumption
+- Data internal - Fixed New Dash Reporting being down after release
+
+### In Progress
+
+- Finance - Screen and Protect API reporting for invoicing purposes
+- Other - Excel tips and best practices documentation (reviewed) - How do you want to proceed? Share resources and/or schedule session? → Session + Resources
+- KPIs - Main KPIs overview (YTD+MTD) - First partial delivery
+
+### Stopped / No advancement
+
+- KPIs/New Dash/AM reporting - Exclude known test accounts to increase data quality (won’t remove all of them) - Engineering to build this properly so we can exclude them properly
+- Guests - Start discussing on the implementation for Guest Products and Single Payment - Multi Service refactor. (No further news)
+
+### To Do (does not include critical subjects)
+
+- Report to support Pass The Keys migration to New Dash
+- Resolutions - DWH modelling
+- KPIs - Main KPIs overview (YTD+MTD) - Second delivery
+ - Creation of APIs KPIs (for Bookings mostly)
+ - Revenue Churn & MRR metrics
+ - Targets
+- Guests - Adapt A/B test monitoring with specs of Illustrations A/B test
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price + Deal id
+- Resolutions - Reporting
+- Invoicing Incident - Further automation improvements: Xero (for source of truth in actual invoiced amount) + Hubspot (for churn/onboarding/AM info) + Backend (for what we should have invoiced, New Dash mostly)
+- KPIs - Payment Count and Average Amount per Payment Count (Waiver/Deposit Fee/etc)
+- RevOps (Kayla) - Churn prevention: alerting system for when a user had a PMS and no longer has it or when recently created bookings per account are decreasing, plus displaying the last time an account was contacted. This could potentially be the first step towards “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
+- RevOps (Alex) - Client Cohorts: explore retention + key metrics to understand if it’s valuable for further client understanding
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-02-26 - Data Planning 1a60446ff9c980d7974cd6d6a1314068.md b/notion_data_team_no_files/2025-02-26 - Data Planning 1a60446ff9c980d7974cd6d6a1314068.md
new file mode 100644
index 0000000..3bac2c6
--- /dev/null
+++ b/notion_data_team_no_files/2025-02-26 - Data Planning 1a60446ff9c980d7974cd6d6a1314068.md
@@ -0,0 +1,43 @@
+# 2025-02-26 - Data Planning
+
+### Done
+
+- Finance - Screen and Protect API reporting for invoicing purposes ([Report here](https://app.powerbi.com/groups/me/apps/043c0aec-20b8-4318-9751-f7164b3634ad/reports/96e5c7c2-d65a-4375-b706-61255498d7ae/c1f8e5bfc0385782a6b6?experience=power-bi). Waiting for Data Quality fixes from Ray side)
+- Finance - Active PMS report now reading from DWH with small improvements ([Report here](https://app.powerbi.com/groups/me/apps/86bd5a07-0cd9-40ab-9e97-71816e3467e8/reports/244d6d40-5c0e-4c66-87b7-f040ca37bfd8/39b56ffe553b8d842f2e?experience=power-bi)).
+- Other - Excel tips and best practices documentation ([Data Resources here](https://www.notion.so/Data-Resources-1520446ff9c98045b44bd670f7bf3605?pvs=21))
+- Report to support Pass The Keys migration to New Dash (Improved [Payments - Details](https://app.powerbi.com/groups/me/apps/86bd5a07-0cd9-40ab-9e97-71816e3467e8/reports/992a437e-35c8-4aea-b908-5753655dc401/3376bcba0aa3617402da?experience=power-bi))
+- Resolutions - DWH modelling
+- KPIs - Main KPIs overview (YTD+MTD) - Second delivery
+ - Revenue Churn & MRR metrics
+ - Targets (first version)
+- Cleaning & Data quality
+ - Remove unnecessary ID of Athena (Guesty) models
+ - Raise and fix issue on a client having thousands of GJ Created that were duplicated
+ - Tag if a New Dash Booking has a GJ
+ - Small fixes on S&P report aggregates
+
+### In Progress
+
+- KPIs - Main KPIs overview (YTD+MTD) - Second delivery
+ - Creation of APIs KPIs (for Bookings mostly)
+ - Targets Refinement
+- OKRs & Business Strategy
+- Guests - A/B test monitoring ongoing (launched 25th Feb)
+- Resolutions - Reporting
+
+### Stopped / No advancement
+
+- KPIs/New Dash/AM reporting - Exclude known test accounts to increase data quality (won’t remove all of them) - Engineering to build this properly so we can exclude them properly
+- Guests - Start discussing on the implementation for Guest Products and Single Payment - Multi Service refactor. (No further news)
+
+### To Do (does not include critical subjects)
+
+- Excel training session (scheduled)
+- Configure Guest Agreement service for New Dash after release (low effort)
+- Resolutions - Automate manual tasks for Finance (waiting for specs)
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price + Deal id
+- Invoicing Incident - Further automation improvements: Xero (for source of truth in actual invoiced amount) + Hubspot (for churn/onboarding/AM info) + Backend (for what we should have invoiced, New Dash mostly)
+- KPIs - Payment Count and Average Amount per Payment Count (Waiver/Deposit Fee/etc)
+- RevOps (Kayla) - Churn prevention: alerting system for when a user had a PMS and no longer has it or when recently created bookings per account are decreasing, plus displaying the last time an account was contacted. This could potentially be the first step towards “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
+- RevOps (Alex) - Client Cohorts: explore retention + key metrics to understand if it’s valuable for further client understanding
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-03-05 - Data Planning 1ad0446ff9c9807aa104ef7a24b97d9e.md b/notion_data_team_no_files/2025-03-05 - Data Planning 1ad0446ff9c9807aa104ef7a24b97d9e.md
new file mode 100644
index 0000000..7d498b4
--- /dev/null
+++ b/notion_data_team_no_files/2025-03-05 - Data Planning 1ad0446ff9c9807aa104ef7a24b97d9e.md
@@ -0,0 +1,54 @@
+# 2025-03-05 - Data Planning
+
+### Done
+
+- Incident - Verification Bulk Update (resolved) - Available [here](20250304-01%20-%20Verification%20Bulk%20Update%201ad0446ff9c9806faa8bf7673e7ed6a5.md)
+- KPIs - Main KPIs overview (YTD+MTD) - Second delivery - Available [here](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/43ac2f2995c23bfb4004?experience=power-bi)
+ - Targets for FY 2025 hidden to avoid misconceptions
+- OKRs & Business Strategy
+- Resolutions - New Resolutions reporting - Available [here](https://app.powerbi.com/groups/me/apps/fc6bf877-6175-413d-98e0-da8eb03d807e/reports/87641841-181e-4b10-933e-4ad1ed465607/9d2ccab758d14dfbc8ce?experience=power-bi)
+- Power BI Truvi Rebranding
+ - [Main KPIs](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/43ac2f2995c23bfb4004?experience=power-bi)
+ - [Truvi Reporting](https://app.powerbi.com/groups/me/apps/86bd5a07-0cd9-40ab-9e97-71816e3467e8/reports/fe54c090-ae85-4cfd-9f28-3d31ab486bc3/dfc2fe95ee1672c1bbdc?experience=power-bi) (previously Superhog reporting)
+- New Dash - Configure Guest Agreement service for New Dash after release
+- Finance - Ensure proper month attribution of Hyperline invoicing for invoiced revenue reporting purposes (affects Main KPIs, AMs Reporting, Accounting Reports)
+- Data Captain requests as usual
+- Guests - Start discussing on the implementation for Guest Products and Single Payment - Multi Service refactor.
+
+### In Progress
+
+- Power BI Truvi Rebranding - rest of reports
+- Guests - A/B test monitoring ongoing (launched 25th Feb)
+- Excel training session (scheduled)
+
+### Stopped / No advancement
+
+- KPIs/New Dash/AM reporting - Exclude known test accounts to increase data quality (won’t remove all of them) - Engineering to build this properly so we can exclude them properly
+
+### To Do (does not include critical subjects)
+
+- Guest Products - Single Payment / Multi Service refactor - Starting Tuesday 11th March
+- Finance/Resolutions - Automate manual tasks for Finance (waiting for specs)
+- Re-visit targets for FY2026 (need Nathan input)→ Talk to Guy / Matt / Nathan - Meeting on Monday
+- Update automation project backbone data
+- Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price + Deal id
+- Idea - Improvements on AM reports
+ - Rework Score so it captures projected revenue rather than last month revenue (i.e., gain ~1 month trend visibility). In other words - be able as of 1st of March to detect that February was bad for HTVR, instead of 1st of April as is now.
+ - Include Price Plans / Offering - New Dash vs. Old Dash
+ - Include API/New Dash/Old Dash category
+ - Include rank and share per Revenue Retained Post Resolutions
+ - Add a “welcome” page per Account Manager with the main indicators / alerts of their accounts.
+ - + potentially other ideas available below (Churn - Kayla, Cohorts - Alex)
+- Idea - Resolutions KPIs
+ - Blocked - We need to capture all Resolutions to be able to build proper metrics
+ - Modelling KPIs data
+ - Main KPIs exposure
+ - Specific reporting
+- Invoicing Incident - Further automation improvements: Xero (as the source of truth for the actual invoiced amount) + Hubspot (for churn/onboarding/AM info) + Backend (for what we should have invoiced, New Dash mostly)
+- KPIs - Payment Count and Average Amount per Payment Count (Waiver/Deposit Fee/etc)
+- RevOps (Kayla) - Churn prevention: alerting system if user had a PMS and no longer has it, if recent created bookings per account is decreasing, display last time account was contacted. This could be potentially the first step towards a “Account Manager KPIs” for AM teams
+- RevOps (Kayla) - Churn tracking: see if we can automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc)
+- RevOps (Alex) - Client Cohorts: explore retention + key metrics to understand if it’s valuable for further client understanding
+- Idea - Guest Journey A/B test report
+ - Avoid manual runs on Data side
+ - Provide deeper level of detail
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-03-12 - Data Planning 1b40446ff9c98043a80bf1520165e3a4.md b/notion_data_team_no_files/2025-03-12 - Data Planning 1b40446ff9c98043a80bf1520165e3a4.md
new file mode 100644
index 0000000..5476b05
--- /dev/null
+++ b/notion_data_team_no_files/2025-03-12 - Data Planning 1b40446ff9c98043a80bf1520165e3a4.md
@@ -0,0 +1,67 @@
+# 2025-03-12 - Data Planning
+
+## General Updates
+
+- Performance review - how is it going to happen?
+
+### Done
+
+- Guests - Further discussion on the implementation of Guest Products and the Single Payment / Multi Service refactor.
+- RevOps - Churn tracking - automate manual monthly Churn reports and enhance it with other data (revenue last 12 m, etc) - Report available [here](https://app.powerbi.com/groups/me/apps/bb1a782f-cccc-4427-ab1a-efc207d49b62/reports/d4955aad-1550-46c7-9549-2bdeebb99286/3555842421d87b032c4e?experience=power-bi)
+- RevOps - Churn prevention - display last time account was contacted
+- RevOps - Account Management - Include API/Platform Deal category
+- Bugfixes - Active PMS & New Dash offered services
+- Other - Slides for TMT Data deep-dive
+- Other - General support
+ - Support Finance on Invoicing subjects
+ - Follow up HTVR
+ - Follow up on data drift due to user migration from old dash → new dash - We lose history on Price Plans…
+ - Couple of data quality alerts
+ - Other Data Captain ad-hoc requests as usual
+
+### In Progress
+
+- TMT - Targets for FY2026
+ - Adapt based on Financial model almost ready - no stretch. Concern on Billable Bookings/Live Deals figure
+ - Data quality improvements in PBI
+ - Deal metrics to align with RevOps KPIs
+ - Platform Billable Bookings (est.) - Understand if there are better ways to track this figure
+ - We need to adapt the metric names on PBI to align with Finance’s naming. I agree, but it requires quite a bit of work.
+- RevOps - Churn prevention - forecast created bookings per account to the end of the month and alert if there’s a decrease (see the sketch after this list). Booking projection is needed for Revenue projection too.
+- Product - Guest Journey New Illustrations A/B test monitoring ongoing (launched 25th Feb) - Looking very good
+- Product - Guest Products - Single Payment / Multi Service refactor
+ - DWH modelling
+ - Old Dash Waiver Extracts for invoicing purposes
+- Other - Excel training session (scheduled for today)
+- Other - Power BI Truvi Rebranding - rest of reports (low prio)
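+
+A minimal sketch of the booking-projection-and-alert idea above, assuming a simple run-rate extrapolation to month end and an illustrative 20% drop threshold; the function names, inputs and threshold are placeholders, not the production logic:
+
+```python
+import calendar
+import datetime
+
+
+def projected_month_total(bookings_so_far: int, today: datetime.date) -> float:
+    """Run-rate projection: extrapolate the month-to-date daily pace to month end."""
+    days_in_month = calendar.monthrange(today.year, today.month)[1]
+    return bookings_so_far / today.day * days_in_month
+
+
+def should_alert(bookings_so_far: int, last_month_total: int,
+                 today: datetime.date, drop_threshold: float = 0.20) -> bool:
+    """Alert if the projection falls more than `drop_threshold` below last month's total."""
+    projection = projected_month_total(bookings_so_far, today)
+    return projection < last_month_total * (1 - drop_threshold)
+
+
+# Example: 35 bookings created by 10th March vs. 150 bookings in February.
+# Projection ~108.5 < 120 (the 20% threshold), so this account would be flagged.
+print(should_alert(35, 150, datetime.date(2025, 3, 10)))  # True
+```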
+
+### Stopped / No advancement
+
+- KPIs/New Dash/AM reporting - Exclude known test accounts to increase data quality (won’t remove all of them) - Engineering to build this properly so we can exclude them reliably
+- RevOps - Update automation project backbone data - Need further input
+- Finance/Resolutions - Automate manual tasks for Finance (waiting for specs)
+
+### To Do (does not include critical subjects)
+
+- ~~Finance - Update legacy (old dash) invoicing exporter to show unit price and quantity besides total price + Deal id~~ Discussed with Nathan, no longer needed.
+- RevOps (Kayla) - Churn prevention: alerting system if user had a PMS and no longer has it
+- Idea - Improvements on AM reports
+ - Rework Score so it captures projected revenue rather than last month revenue (i.e., gain ~1 month trend visibility). In other words - be able as of 1st of March to detect that February was bad for HTVR, instead of 1st of April as is now.
+ - Add a “welcome” page per Account Manager with the main indicators / alerts of their accounts.
+ - Include Price Plans / Offering - New Dash vs. Old Dash
+ - Include rank and share per Revenue Retained Post Resolutions
+- API KPIs
+ - We need at least bookings to compute Total Bookings (Platform Billable Bookings + API Bookings)
+- Idea - Resolutions KPIs
+ - Blocked - We need to capture all Resolutions to be able to build proper metrics
+ - Modelling KPIs data
+ - Main KPIs exposure
+ - Specific reporting
+- Invoicing Incident - Further automation improvements - Xero (as the source of truth for the actual invoiced amount) + Hubspot (for churn/onboarding/AM info) + Backend (for what we should have invoiced, New Dash mostly)
+- KPIs - Payment Count and Average Amount per Payment Count (Waiver/Deposit Fee/etc)
+- RevOps (Alex) - Client Cohorts - explore retention + key metrics to understand if it’s valuable for further client understanding
+- RevOps (Alex) - RevOps KPIs - Automate RevOps KPIs sheet in PBI and add additional content (revenue, etc). Similar approach as for Churn.
+- Product (Daga) - New Dash - Differentiate OSL/Manual/PMS bookings
+- Idea - Guest Journey A/B test report
+ - Avoid manual runs on Data side
+ - Provide deeper level of detail
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-03-17 - Glad you’re back, Daddy Pablo 1b40446ff9c980ce837edb9154593919.md b/notion_data_team_no_files/2025-03-17 - Glad you’re back, Daddy Pablo 1b40446ff9c980ce837edb9154593919.md
new file mode 100644
index 0000000..5bd48b7
--- /dev/null
+++ b/notion_data_team_no_files/2025-03-17 - Glad you’re back, Daddy Pablo 1b40446ff9c980ce837edb9154593919.md
@@ -0,0 +1,72 @@
+# 2025-03-17 - Glad you’re back, Daddy Pablo
+
+Some things happened when you were not here, so here’s a summary!
+
+# Incidents
+
+Important one:
+
+[20250124-01 - Booking invoicing incident](20250124-01%20-%20Booking%20invoicing%20incident%201880446ff9c9803fb830f8de24d97ebb.md)
+
+Other incidents:
+
+[20250304-01 - Verification Bulk Update](20250304-01%20-%20Verification%20Bulk%20Update%201ad0446ff9c9806faa8bf7673e7ed6a5.md)
+
+[20250122-01 - Power BI Main Guest KPIs Bug](20250122-01%20-%20Power%20BI%20Main%20Guest%20KPIs%20Bug%201840446ff9c980249355f34c58c4686e.md)
+
+# Power BI main updates
+
+- Main KPIs now has proper KPI tracking (3 new tabs) and can compare against targets. Small overall improvements.
+- Account Managers now has a dedicated Churn report. Small overall improvements.
+- Accounting Reports now have a Finance aggregation (based on a seed), showing a monthly aggregation and per-deal tracking. Xero data also contains Hyperline billing, and we have these invoices/credit notes identified.
+- Truvi Reporting (previously SH reporting) is now fully reading from the DWH.
+- New Resolutions Report is available in a dedicated app.
+- New Dashboard report now has an Offered Service tab. Small overall improvements. Be aware that the link to the app changed.
+- API Reports now have the Screen and Protect invoicing report - still no clients.
+- Truvi Rebranding is ongoing.
+
+# Key Priorities
+
+- Keep an eye on the Data infra, even though nothing has seriously broken. My knowledge is limited in this regard, but:
+ - We had the “issue” with CPU sitting around 100%. It happened again on 12th March: the Xero integration in Airbyte was stuck, but I managed to fix it manually
+ - I received an email from Azure on VM stuff
+ - We did several data integrations from Cosmos (Resolutions) and new tables from the backend (Core); might be worth double-checking
+- Multi-service single payment & Guest Products - ongoing. Involves DWH (due to backend) and Old Invoicing (Stripe metadata) changes
+- KPIs vs. targets for Financial Year 2026 + reporting (part of business strategy)
+- Churn prevention - data-based alerting system. Good opportunity to leverage KPIs
+- Data drifts and data quality - we need a better way to align with Tech without us being blockers. Data contracts could be a possibility. Forcing full refreshes on incremental models once every X days could also be interesting (see the sketch after this list).
+- We need to align on how we want to organise for Q2. A few thoughts:
+ - Aim for very few key areas, really focusing on the must-haves
+ - Internally in Data, commit to very few must-have items for Q2
+ - Allow plenty of space and capacity for flexibility - this has worked very well in Q1
+- Hyperline initiatives should be fine for the moment on the Data side, but we’re providing quite a bit of support to Finance given the user migration from old to new dash and so on. We might need to start pulling data from Hyperline at some point (but there is no immediate rush).
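+
+A minimal sketch of the “full refresh every X days” idea above, assuming a daily scheduler invoking the dbt CLI; the cadence, the model selector and the job wiring are assumptions, not our actual setup:
+
+```python
+import datetime
+import subprocess
+
+REFRESH_EVERY_DAYS = 7  # hypothetical cadence ("once every X days")
+INCREMENTAL_MODELS = "config.materialized:incremental"  # dbt selector for incremental models
+
+
+def dbt_run(today=None):
+    """Run dbt daily; every Nth day, rebuild incremental models from scratch so that
+    upstream data drift (late updates, backfills) gets picked up."""
+    today = today or datetime.date.today()
+    cmd = ["dbt", "run", "--select", INCREMENTAL_MODELS]
+    if today.timetuple().tm_yday % REFRESH_EVERY_DAYS == 0:
+        cmd.append("--full-refresh")
+    subprocess.run(cmd, check=True)
+
+
+if __name__ == "__main__":
+    dbt_run()
+```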
+
+# Other stuff
+
+- You might need to contact Cigna for medical insurance if you want to opt for it
+- We will be having a Personal Development Review (PDR) by the end of March
+- We should take a look at what improvements can be made to the data in HubSpot. While working with Alex to sync our displayed deals lifecycle, we have discovered a lot of issues with the HubSpot data.
+
+# Domain Analyst - Batch #2
+
+Uri did a general presentation for TMT and leads, and there’s quite a bit of interest in Domain Analysts.
+
+Potential candidates:
+
+- Chloe (Resolutions) - Proposed herself, motivated to learn
+- Maha (Marketing) - Doing a Data Analytics course, makes tons of sense - to be discussed in depth
+- Daga (Product) - Discussed in the past, would be good to have in-depth New Dash visibility
+- Lisa (Finance) - Nathan explained the program, she is interested, would be good to regain a Domain Analyst in Finance
+
+We didn’t have a proper page explaining the Domain Analyst program, so I created one. Feel free to challenge it: [https://www.notion.so/truvi/Domain-Analyst-Program-1b60446ff9c980e58ab1fef0e3909085](https://www.notion.so/Domain-Analyst-Program-1b60446ff9c980e58ab1fef0e3909085?pvs=21)
+
+# Fun things
+
+- Big team lunch (comilona) this Wednesday.
+
+# Pablo’s own notes
+
+- [ ] Ask about this: > You might need to contact Cigna for medical insurance if you want to opt for it
+- [x] Review exporter code
+- [ ] Review PBIs
+- [ ] Review new seeds and schemas in dbt
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-03-19 - Data Planning 1bb0446ff9c98072bdbfcc71ff6a028b.md b/notion_data_team_no_files/2025-03-19 - Data Planning 1bb0446ff9c98072bdbfcc71ff6a028b.md
new file mode 100644
index 0000000..87e203c
--- /dev/null
+++ b/notion_data_team_no_files/2025-03-19 - Data Planning 1bb0446ff9c98072bdbfcc71ff6a028b.md
@@ -0,0 +1,26 @@
+# 2025-03-19 - Data Planning
+
+### Done
+
+- RevOps - Churn prevention report improvements
+- TMT - Targets for FY2026
+ - Adapt based on Financial model almost ready - no stretch.
+ - Data quality improvements in PBI
+ - Deal metrics to align with RevOps KPIs
+- Other - Excel training session
+
+### In Progress
+
+- Finance - Align the metric names Data x Finance
+- TMT - Targets + Simple report
+- RevOps - Alerting! - forecast created bookings per account to the end of the month and alert if there’s a decrease. Booking projection needed for Revenue projection too.
+- Product - Guest Journey New Illustrations A/B test monitoring ongoing (launched 25th Feb)
+- Product - Guest Products - Single Payment / Multi Service refactor
+- Data - Automatic CI quality checks
+
+### To Do (does not include critical subjects)
+
+- New Dash KPIs - Service adoption & service revenue streams
+ - Adoption rates of each service
+ - Total Revenue + per Booking - to compare actual revenue (invoiced) vs. the chargeable vs. the discounts
+- Decide what to do regarding Domain Analyst program
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-03-26 - Data Planning 1c20446ff9c980539269f1a4871bb0c7.md b/notion_data_team_no_files/2025-03-26 - Data Planning 1c20446ff9c980539269f1a4871bb0c7.md
new file mode 100644
index 0000000..a09e154
--- /dev/null
+++ b/notion_data_team_no_files/2025-03-26 - Data Planning 1c20446ff9c980539269f1a4871bb0c7.md
@@ -0,0 +1,26 @@
+# 2025-03-26 - Data Planning
+
+### To discuss
+
+- We automate or we don’t
+
+### Done
+
+- TMT - Targets + Simple report
+- Data - Automatic CI quality checks
+- Support HTVR invoicing + autohost issue
+- Follow up data quality issues on backend
+
+### In Progress
+
+- Migration Old Dash to New Dash - Update input
+- New Dash - Service adoption
+- RevOps - Alerting! - forecast created bookings per account to the end of the month and alert if there’s a decrease. Booking projection needed for Revenue projection too.
+- Product - Guest Journey New Illustrations A/B test monitoring ongoing (launched 25th Feb)
+- Product - Guest Products - Single Payment / Multi Service refactor
+
+### To Do (does not include critical subjects)
+
+- New Dash - Revenue service streams + linked to Resolutions + protections = Booking P&L
+- Finance - Align the metric names Data x Finance
+- Decide what to do regarding Domain Analyst program
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-04-09 - Data Planning 1d00446ff9c98009a080e7cb8c5732af.md b/notion_data_team_no_files/2025-04-09 - Data Planning 1d00446ff9c98009a080e7cb8c5732af.md
new file mode 100644
index 0000000..a5db62f
--- /dev/null
+++ b/notion_data_team_no_files/2025-04-09 - Data Planning 1d00446ff9c98009a080e7cb8c5732af.md
@@ -0,0 +1,26 @@
+# 2025-04-09 - Data Planning
+
+### To discuss
+
+- PDRs
+
+### Done
+
+- Updated file for Kayla on Old Dash to New Dash migration
+- New Dash services adoption
+- Account Performance report
+- Many data alerts due to Backend bugs
+- Internal:
+ - CI in DWH
+ - Refactor KPIs in DWH
+
+### In Progress
+
+- Domain Analysts
+ - Discussions / introductions scheduled with everyone
+ - Training being prepared
+ - Exploring tooling
+- Flagging project
+ - Pending conversations with Tech and Resolutions
+- Data quality issues management w. Ben
+- Account alerting / more timely information
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-04-16 - Data Planning 1d60446ff9c980aca796c1791efc320e.md b/notion_data_team_no_files/2025-04-16 - Data Planning 1d60446ff9c980aca796c1791efc320e.md
new file mode 100644
index 0000000..68c7cb8
--- /dev/null
+++ b/notion_data_team_no_files/2025-04-16 - Data Planning 1d60446ff9c980aca796c1791efc320e.md
@@ -0,0 +1,32 @@
+# 2025-04-16 - Data Planning
+
+### Done
+
+- Flagging project
+ - Conversations with Tech and Resolutions
+ - First monitoring system implemented
+- Domain Analysts
+ - Discussions / introductions with everyone
+- Airbnb data request
+- Data quality issues management w. Ben
+ - New board shared between Tech x Data to track and fix DQ issues
+- Fix data quality issue on Revenue metrics
+- Small improvements on many reports, e.g. the Churn Report now allows multi-month selection
+- Illustration A/B test to be finished this week, no significant results
+
+### In Progress
+
+- Flagging project
+ - Pending conversation to present first results
+ - We need more data
+- Ensure Old Dash Invoicing does not capture New Dash Bookings
+ - We have alignment with Tech & Finance, implementing changes
+- Domain Analysts
+ - Training being prepared
+ - Exploring tooling
+- Prepare launch of next A/B test - Welcome page visual changes
+- Account alerting / more timely information
+- Capture CIH API invoiced revenue in KPIs & Accounting reports
+- Prepare for Account Managers to Customer Relations changes
+ - Account Manager will become obsolete → affects many reports
+ - Rely on new RRPR-based segmentation; discussion ongoing
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-04-30 - Data Planning 1e50446ff9c9806983f3c2c7de69b3fb.md b/notion_data_team_no_files/2025-04-30 - Data Planning 1e50446ff9c9806983f3c2c7de69b3fb.md
new file mode 100644
index 0000000..866f9be
--- /dev/null
+++ b/notion_data_team_no_files/2025-04-30 - Data Planning 1e50446ff9c9806983f3c2c7de69b3fb.md
@@ -0,0 +1,29 @@
+# 2025-04-30 - Data Planning
+
+- Humphrey causing confusion with Screen & Protect price structure
+
+### Done
+
+- Screening and Protection relationship (we’ll find a better name) project
+ - Requested input from Data team
+ - First preliminary analysis conducted
+- Ensure Old Dash Invoicing does not capture New Dash Bookings
+- Domain Analysts
+ - Lisa, Chloe, Maha and Daga
+ - Kick off conducted, programme started
+ - Currently doing the first Excel levelling assessment
+- Capture CIH API invoiced revenue in KPIs & Accounting reports
+- Host Resolutions appearing in both Bank Transactions and Credit Notes
+
+### In Progress
+
+- ~~Flagging~~ project
+ - Tracking performance continuously until we have more data
+ - Gathering feedback from customer facing colleagues
+- Domain Analysts
+ - Training being prepared continuously
+ - Exploring tooling
+- Launch of next A/B test - Welcome page visual changes
+ - Keeping an eye on performance
+- Account alerting / more timely information
+ - New Report for Account Growth and Impact currently under internal review
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-05-07 - Data Planning 1ec0446ff9c980929fe1cb35108c6436.md b/notion_data_team_no_files/2025-05-07 - Data Planning 1ec0446ff9c980929fe1cb35108c6436.md
new file mode 100644
index 0000000..63f38db
--- /dev/null
+++ b/notion_data_team_no_files/2025-05-07 - Data Planning 1ec0446ff9c980929fe1cb35108c6436.md
@@ -0,0 +1,41 @@
+# 2025-05-07 - Data Planning
+
+- Pablo off next week 12-16 May
+- Joaquin off tomorrow 8 May, working remotely from Chile from 12-16. Off from 19-28 May
+- Uri is around
+
+### Done
+
+- Data-Driven Risk Assessment (DDRA) project
+ - EDA on Resolution Incidents: [2025-05-02 Exploratory Data Analysis on Resolution Incidents](https://www.notion.so/2025-05-02-Exploratory-Data-Analysis-on-Resolution-Incidents-1e70446ff9c98043b263e3b2eadb79fb?pvs=21)
+ - Gathered feedback from business-facing teams
+- Account alerting / more timely information
+ - New Report for Account Growth and Impact released. [Report available here](https://app.powerbi.com/groups/me/apps/bb1a782f-cccc-4427-ab1a-efc207d49b62/reports/3e1819f4-7069-49e1-8c6b-2e7527d596e3/ReportSectionddc493aece54c925670a?experience=power-bi)
+ - Previously existing report of Account Managers Overview will be eliminated on May 23rd
+- Improvements on Host Resolutions Payments Report with new tab Account Rankings. [Report available here](https://app.powerbi.com/groups/me/apps/4a019abb-880f-4184-adc9-440ebd950e00/reports/86abbd2f-bfa5-4a51-adf5-4c7a3be9de07/7087b20b3e118306020e?experience=power-bi)
+- Domain Analysts
+ - Daga, Maha, Chloe
+ - Excel levelling test completed, new assignment for next week
+ - We expect closure of Excel training by next week
+ - Lisa
+ - Excel training completed, starting SQL training
+- Business as usual
+ - Data alerts follow up and fixes
+ - Data requests handling
+
+### In Progress
+
+- Stripe vs. Backend payments data discrepancies
+ - Early results ~3% missing payments in backend with respect to Stripe
+- DDRA
+ - Continuous monitoring of New Dash Protected Bookings performance
+ - Start phase 2
+- Domain Analysts
+ - Training being prepared continuously
+ - Exploring tooling
+- Confident Stay (Guest Products + Single Payment / Multi Service)
+ - Resuming work
+- A/B test - Welcome page visual changes
+ - Keeping an eye on performance
+- Data team internal
+ - Fixing DWH CI
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-05-14 - Data Planning 1f30446ff9c98022bcbae63f192a3e11.md b/notion_data_team_no_files/2025-05-14 - Data Planning 1f30446ff9c98022bcbae63f192a3e11.md
new file mode 100644
index 0000000..2055f13
--- /dev/null
+++ b/notion_data_team_no_files/2025-05-14 - Data Planning 1f30446ff9c98022bcbae63f192a3e11.md
@@ -0,0 +1,46 @@
+# 2025-05-14 - Data Planning
+
+### Data Team Updates
+
+- Pablo off, back on Monday
+- Joaquin working remotely from Chile this week. Off from 19-28 May
+- Uri is around
+
+### Confident Stay
+
+- Done:
+ - Sync with Guest Squad on latest state of Confident Stay
+ - Finished DWH internal refactor to accommodate for new incoming logic of Guest Products
+- Next:
+ - Integrate Guest Products from the new flow into Guest Journey Payments (Check In Hero + Confident Stay)
+ - Continue DWH modelling on Guest Products
+
+### Domain Analysts
+
+- Chloe dropped the course
+- Done:
+ - Daga + Maha: Excel training concluded successfully
+- Next:
+ - Launching SQL training for Daga + Maha
+ - We’ll take both Daga + Maha and Lisa while Joaquin is off
+ - Continuously exploring tooling
+
+### Business as usual
+
+- Done:
+ - Data alerts follow up and fixes
+ - Data requests handling
+
+### Stripe vs. Backend payments data discrepancies
+
+- On hold until Pablo is back
+- Early results: ~3% of payments missing in the backend with respect to Stripe (see the reconciliation sketch after this list)
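+
+A minimal sketch of the reconciliation behind the ~3% figure, assuming we compare payment identifiers from a Stripe export against a backend payments extract; the CSV inputs and column names (`id`, `payment_intent_id`) are placeholders for illustration:
+
+```python
+import csv
+
+
+def load_ids(path: str, column: str) -> set:
+    """Read one column of a CSV export into a set of non-empty identifiers."""
+    with open(path, newline="") as f:
+        return {row[column] for row in csv.DictReader(f) if row[column]}
+
+
+def reconcile(stripe_csv: str, backend_csv: str) -> None:
+    stripe_ids = load_ids(stripe_csv, "id")                   # Stripe payment ids
+    backend_ids = load_ids(backend_csv, "payment_intent_id")  # ids stored in the backend
+    missing = stripe_ids - backend_ids                        # present in Stripe, absent in backend
+    share = len(missing) / len(stripe_ids) if stripe_ids else 0.0
+    print(f"{len(missing)} of {len(stripe_ids)} Stripe payments are missing "
+          f"in the backend ({share:.1%})")
+
+
+if __name__ == "__main__":
+    reconcile("stripe_payments.csv", "backend_payments.csv")
+```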
+
+### Data-Driven Risk Assessment (DDRA)
+
+- Continuous monitoring of New Dash Protected Bookings performance
+- Start phase 2 (if we have time)
+
+### A/B test - Welcome page visual changes
+
+- Continuous monitoring
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-05-21 - Data Planning 1fa0446ff9c980f4a7b0d29c47c12c12.md b/notion_data_team_no_files/2025-05-21 - Data Planning 1fa0446ff9c980f4a7b0d29c47c12c12.md
new file mode 100644
index 0000000..6967c64
--- /dev/null
+++ b/notion_data_team_no_files/2025-05-21 - Data Planning 1fa0446ff9c980f4a7b0d29c47c12c12.md
@@ -0,0 +1,66 @@
+# 2025-05-21 - Data Planning
+
+### Data Team Updates
+
+- Joaquin off from 19-28 May
+
+### On Billable Bookings emergency
+
+- Done:
+ - Several analyses + support + gathered outputs:
+ - [Created Bookings evolution Old Dash → New Dash per account](https://www.notion.so/Created-Bookings-evolution-Old-Dash-New-Dash-per-account-1f50446ff9c9803ca922c2341bd714c2?pvs=21)
+ - New report: New Dash Onboarding
+- To discuss:
+ - Our North Star: We are committed to delivering dependable screening and protection services, to build a profitable and **sustainable** business.
+ - We’re not far away from profitability, but this needs to go through the **sustainable** part.
+ - The Billable Bookings investigation has raised several invoicing-related issues that are KEY to reaching our goal. Invoicing clients correctly is core business. We have the feeling that this is not properly supported, as we spend more time discussing new initiatives (e.g., Pet Waiver) than fixing what needs to be fixed. Core business should work like a charm, and that’s clearly not the case.
+ - Decisions:
+ - Onboarding process being refined + also historical clients will be reviewed
+ - Invoicing
+ - New Dash: services being invoiced in different times, and it’s built wrongly. From 1st of June, all billing will be happening on verification start
+
+### Screen and Protect API
+
+- Discovered today that the fees we report in PBI for the only client in S&P are wrong.
+- S&P expects nightly fees. The first client is already an exception and is charged per booking.
+- No one has contacted us about this, nor updated the logic in the documentation.
+
+### Confident Stay
+
+- Done:
+ - Integrate Guest Products from the new flow into Guest Journey Payments (Check In Hero + Confident Stay)
+ - Performance optimisations
+- Next:
+ - Continue DWH modelling on Guest Products
+ - Ensure Check In Hero reporting makes it through the change
+ - Prepare confident stay reporting
+
+### Data-Driven Risk Assessment (DDRA)
+
+- Ongoing:
+ - Continuous monitoring of New Dash Protected Bookings performance
+- Next:
+ - Start phase 2
+ - We would have liked to start, but there are other things on the table
+
+### Stripe vs. Backend payments data discrepancies
+
+- Work in progress
+- But there is definitely an issue
+- Having a hard time nailing it down because of other priorities
+
+### Domain Analysts
+
+- Ongoing:
+ - SQL training for Daga + Maha + Lisa
+
+### A/B test - London Wallpaper & visual changes
+
+- Continuous monitoring of current London Wallpaper A/B test
+- Discussion with Guest Squad on a new potential A/B test. Not comfortable with the tracking or implementation plan, so it has been put on hold.
+
+### Business as usual
+
+- Done:
+ - Data alerts follow up and fixes
+ - Data requests handling
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-05-28 - Data Planning 2010446ff9c980c3b428dd7d76aaffb5.md b/notion_data_team_no_files/2025-05-28 - Data Planning 2010446ff9c980c3b428dd7d76aaffb5.md
new file mode 100644
index 0000000..b77c2ce
--- /dev/null
+++ b/notion_data_team_no_files/2025-05-28 - Data Planning 2010446ff9c980c3b428dd7d76aaffb5.md
@@ -0,0 +1,37 @@
+# 2025-05-28 - Data Planning
+
+### Off Topics
+
+- Joaquin is back tomorrow
+- Confident Stay launch is messy (it was expected to be launched on a Saturday, misalignment on New Dash/Old Dash release, etc)
+- VOTC feedback
+- Billable Bookings & Invoicing & Onboarding
+
+### Confident Stay & Check In Hero
+
+- Done
+ - Confident Stay available in KPIs
+ - Confident Stay dependent metrics (Guest Revenue, Total Revenue, RRPR, etc.) updated accordingly in KPIs
+- In Progress
+ - Continue DWH modelling on Guest Products
+ - Ensure Check In Hero reporting makes it through the change
+ - Prepare confident stay reporting
+
+### Domain Analysts
+
+- SQL training for Daga + Maha + Lisa
+- Access to DWH
+
+### New Dash Reporting
+
+- Improvements on New Dash Reporting Overview, including Check In date & Basic Screening Bookings
+
+### A/B test Guest Journey
+
+- A/B test London Wallpaper finished yesterday 27th May, results [here](https://www.notion.so/2025-05-27-Guest-Journey-London-Wallpaper-A-B-Test-Results-2000446ff9c9800d86f2d3bcfdbbec42?pvs=21)
+ - Mostly non-significant results, likely positive CSAT - should be re-done in the future with more cities
+- A/B test Your Trip Questionaire launched yesterday 27th May, details [here](https://www.notion.so/2025-Q2-2-Your-Trip-Questionaire-Guest-Journey-A-B-test-1f90446ff9c980a296b9ecb47cad21ef?pvs=21)
+
+### Data-Driven Risk Assessment (DDRA)
+
+- No news, didn’t have time to work on this. Likely retaking it this week
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-06-04 - Data Planning 2080446ff9c9803cba09d8b32b43501d.md b/notion_data_team_no_files/2025-06-04 - Data Planning 2080446ff9c9803cba09d8b32b43501d.md
new file mode 100644
index 0000000..573e9f7
--- /dev/null
+++ b/notion_data_team_no_files/2025-06-04 - Data Planning 2080446ff9c9803cba09d8b32b43501d.md
@@ -0,0 +1,40 @@
+# 2025-06-04 - Data Planning
+
+### Off Topics
+
+- Monday 9th is day off in Barcelona
+- On yesterday’s Q3 meeting
+ - Billing doesn’t get enough attention
+ - Still prioritising new initiatives without clear impact or effort estimation while there are clear things to fix that provide value
+ - Respect timings
+ - Democratic prioritisation does not work
+
+### Confident Stay & Check In Hero
+
+- Confident Stay Reporting is now live in Guest Insights
+- All Check In Hero reports have moved to Guest Insights
+- Ensure Check In Hero reporting makes it through the change
+
+### Data-Driven Risk Assessment (DDRA)
+
+- Setting everything up for experimentation
+- First baseline (randomly flagging 1% of bookings) set up (see the sketch after this list)
+- First experiment ongoing
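+
+A minimal sketch of what the random-1% baseline looks like and how its precision/recall would be measured against observed resolution incidents; the booking IDs and incident labels below are synthetic placeholders:
+
+```python
+import random
+
+
+def random_flag(booking_ids, rate=0.01, seed=42):
+    """Baseline: flag each booking independently with probability `rate`."""
+    rng = random.Random(seed)
+    return {b for b in booking_ids if rng.random() < rate}
+
+
+def precision_recall(flagged, incidents):
+    true_positives = len(flagged & incidents)
+    precision = true_positives / len(flagged) if flagged else 0.0
+    recall = true_positives / len(incidents) if incidents else 0.0
+    return precision, recall
+
+
+if __name__ == "__main__":
+    bookings = {f"booking_{i}" for i in range(10_000)}
+    # Synthetic "resolution incidents" for ~1.5% of bookings.
+    incidents = set(random.Random(0).sample(sorted(bookings), 150))
+    flagged = random_flag(bookings)
+    print(precision_recall(flagged, incidents))  # a random baseline hovers around the base rates
+```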
+
+### Domain Analysts
+
+- SQL training for Daga + Maha + Lisa
+- Access to DWH
+
+### New Dash Reporting
+
+- New Check In Bookings tab
+
+### Accounting Reports
+
+- Improvements ongoing to capture accounts with higher due amounts
+- Budget tracker exploration
+
+### A/B test Guest Journey
+
+- Continuous monitoring, no relevant results yet
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-06-11 - Data Planning 20f0446ff9c980269e0bddf562b133a0.md b/notion_data_team_no_files/2025-06-11 - Data Planning 20f0446ff9c980269e0bddf562b133a0.md
new file mode 100644
index 0000000..bf0bcc1
--- /dev/null
+++ b/notion_data_team_no_files/2025-06-11 - Data Planning 20f0446ff9c980269e0bddf562b133a0.md
@@ -0,0 +1,36 @@
+# 2025-06-11 - Data Planning
+
+### Off Topics
+
+- Screen & Protect API - Pricing/Invoicing
+- Q3 Data objectives planning
+ - We survived the first semester chaos
+ - We want to organise again and have room for long-term scopes
+- Data Insights TMT session
+ - Can we have 4 monthly meetings instead of 1 weekly?
+
+### Confident Stay & Check In Hero
+
+- Reporting done, being reviewed after launch
+- Next step: Stripe process for Waivers
+
+### Data-Driven Risk Assessment (DDRA)
+
+- Internal Data team kick-off tomorrow, expecting to dedicate a huge amount of work to this in the coming weeks
+- Expecting to have first insights by the beginning of July
+
+### Domain Analysts
+
+- SQL training for Daga + Maha + Lisa
+
+### Accounting Reports
+
+- Budget tracker waiting for Finance team
+
+### Data quality improvements
+
+- Removed all test accounts and their activity (bookings, etc) from all reports
+
+### A/B test Guest Journey
+
+- Continuous monitoring; so far it seems that the new questions do not have a dramatic negative effect
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-06-18 - Data Planning 2150446ff9c980d8be61f2048a1546fa.md b/notion_data_team_no_files/2025-06-18 - Data Planning 2150446ff9c980d8be61f2048a1546fa.md
new file mode 100644
index 0000000..6695f6b
--- /dev/null
+++ b/notion_data_team_no_files/2025-06-18 - Data Planning 2150446ff9c980d8be61f2048a1546fa.md
@@ -0,0 +1,35 @@
+# 2025-06-18 - Data Planning
+
+### Off Topics
+
+- Pablo off this week, Uri off next week - Tuesday no one is here
+- Flex API - Reporting. Why is this a priority if it’s not the Truvi way?
+- Data team internal collaboration changes: Data Captain will send the Business Targets every Friday
+
+### Finance & Data session
+
+- Slides here: https://guardhog.sharepoint.com/:p:/r/sites/DataTeam/_layouts/15/Doc.aspx?sourcedoc=%7BEEC0649B-164D-47DC-AC55-DBFF66783350%7D&file=202506%20-%20TMT%20-%20FINANCE%20%26%20DATA.pptx&action=edit&mobileredirect=true
+- Nathan will focus on cost control and more in-depth financial subjects
+- Uri will focus on Top-Down Revenue issues
+- Important: Should we put the migration of bigger clients on hold at least until mid-September? Based on the insights, it seems risky to do this in peak season.
+
+### New Dash
+
+- New user-alert tagging in place
+
+### Data-Driven Risk Assessment (DDRA)
+
+- Ongoing: 3 separate explorations being carried out individually
+- Expecting to have first insights by the beginning of July
+
+### Domain Analysts
+
+- SQL training for Daga + Maha + Lisa
+
+### Accounting Reports
+
+- Small improvements
+
+### A/B test Guest Journey
+
+- Continuous monitoring; so far it seems that the new questions do not have a dramatic negative effect
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-06-18 - Data Team Weekly 2160446ff9c980ec8291d85f78e3d29f.md b/notion_data_team_no_files/2025-06-18 - Data Team Weekly 2160446ff9c980ec8291d85f78e3d29f.md
new file mode 100644
index 0000000..2be6ae7
--- /dev/null
+++ b/notion_data_team_no_files/2025-06-18 - Data Team Weekly 2160446ff9c980ec8291d85f78e3d29f.md
@@ -0,0 +1,59 @@
+# 2025-06-18 - Data Team Weekly
+
+# Last Week Summary
+
+### All team
+
+- We had a kick-off discussion on Data-Driven Risk Assessment. We started with EDAs.
+- DBT was upgraded to 1.9.8
+
+### Joaquin
+
+- Invoicing and Crediting report improvements, based on final requirements discussed with Nathan
+- S&P Pricing discussion happened. For ALL APIs, it was agreed that we won’t do anything until 1) we have a proper client base and 2) a proper pricing structure is well defined.
+- Budget reporting requirements. After checking with Nila, the data was not found in Xero. It currently sits with her to find it.
+- Stripe export documentation is in place, waiting for review from Nila. Once Pablo is back, we’ll discuss how to advance the implementation.
+- Lisa (domain analyst). Currently doing the first basic SQL exercises, advancing well.
+
+### Uri
+
+- Lots of discussions on Billable Bookings:
+ - Basic Screening iterations (charging, etc.): Basic Screening as a standalone Program (a Program with one Service, Basic Screening, which is free) is 1) available to all Hosts by default, 2) applied directly to ANY listing when connecting a PMS, and 3) all Listings are imported as active by default, including for migrated users.
+ - New Dash user tagging as alerts. This has been in place since Monday 16th.
+ - Helped Tech investigate whether the billable queries were OK - they seemed fine to me.
+- Prepared presentation for the first Finance and Data Monthly Workshop for TMT: Top-Down from Revenue to Issues in New Dash
+- DDRA: a bit of exploration; still need to wrap it up.
+- Maha & Daga as DA. Currently doing the intermediate exercises, reviewing today.
+
+### Pablo
+
+Was off this week!
+
+# Ad-hoc topics to discuss
+
+To discuss with Pablo: the current limitations of Xero Contacts upon archiving, as Finance is asking whether the issue is already resolved.
+
+Since Pablo is not here, we skip this discussion now.
+
+# Scopes for Next Week
+
+### Joaquin
+
+- Mostly focus on DDRA (EDA)
+- Keep an eye on A/B test
+- Keep working w Lisa on Domain Analyst tasks
+- Keep an eye on Data Captain tasks
+- Keep an eye on Monday as I’ll be alone
+
+### Uri
+
+- I’ll be off next week, so it’s just until this Friday
+- Mostly:
+ - Domain analysts: check current progress, if ok will start with one of the challenges
+ - Present Finance & Data workshop
+ - Keep an eye on Billable Bookings discussion
+ - Focus on DDRA
+
+### Pablo
+
+Was off this week!
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-06-25 - Data Planning 21d0446ff9c980e1b344ebe772a3b980.md b/notion_data_team_no_files/2025-06-25 - Data Planning 21d0446ff9c980e1b344ebe772a3b980.md
new file mode 100644
index 0000000..e93276d
--- /dev/null
+++ b/notion_data_team_no_files/2025-06-25 - Data Planning 21d0446ff9c980e1b344ebe772a3b980.md
@@ -0,0 +1,32 @@
+# 2025-06-25 - Data Planning
+
+### Off Topics
+
+- Uri off this week, will be back next Monday.
+
+### Old invoicing
+
+- We’re building the changes necessary to invoice properly on July 1st, after the changes to Guest Products made by the Guest Squad
+- Also needed so Finance has detailed data from Stripe
+
+### Data-Driven Risk Assessment (DDRA)
+
+- Ongoing: 3 separate explorations being carried out individually, almost finished
+- Expecting to have first insights by the beginning of July
+
+### Domain Analysts
+
+- SQL training for Daga + Maha + Lisa
+
+### Accounting Reports
+
+- Working on Budget tracking. Budget itself was simple to figure out.
+
+### A/B test Guest Journey
+
+- Continuous monitoring; so far it seems that the new questions do not have a dramatic negative effect
+- Actually, it seems we’ve somehow doubled the Check In Hero conversion rate. We will have confirmation soon (see the sketch after this list for how we sanity-check variant conversion rates).
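+
+A minimal sketch of how we sanity-check a jump like this: conversion rate per variant plus a two-sided two-proportion z-test; the counts below are made up for illustration and are not the real experiment numbers:
+
+```python
+import math
+
+
+def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
+    """Two-sided z-test comparing the conversion rates of two variants."""
+    p_a, p_b = conv_a / n_a, conv_b / n_b
+    pooled = (conv_a + conv_b) / (n_a + n_b)
+    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
+    z = (p_b - p_a) / se
+    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
+    return p_a, p_b, z, p_value
+
+
+if __name__ == "__main__":
+    # Control vs. the new questionnaire variant, with made-up counts.
+    p_a, p_b, z, p = two_proportion_ztest(conv_a=120, n_a=4000, conv_b=235, n_b=3950)
+    print(f"control={p_a:.2%}  variant={p_b:.2%}  z={z:.2f}  p={p:.4f}")
+```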
+
+### Data pipelines concerns
+
+- We’re having trouble syncing data from applications to the DWH. We need Ben’s help, but he’s hard to chase.
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-06-25 - Data Team Weekly 21d0446ff9c980e1b064fc64705671f7.md b/notion_data_team_no_files/2025-06-25 - Data Team Weekly 21d0446ff9c980e1b064fc64705671f7.md
new file mode 100644
index 0000000..1116066
--- /dev/null
+++ b/notion_data_team_no_files/2025-06-25 - Data Team Weekly 21d0446ff9c980e1b064fc64705671f7.md
@@ -0,0 +1,53 @@
+# 2025-06-25 - Data Team Weekly
+
+# Last Week Summary
+
+### All team
+
+### Joaquin
+
+- EDA DDRA
+- Data alerts on resolutions
+- Invoicing and crediting reporting improvements
+- Tracking A/B test, BAU
+- Lisa completed the basic SQL challenges
+
+### Uri
+
+- Had presentation with TMT on high level business targets and how to use PBI
+- Discussed the halt of the migration of users from the old dash to new dash
+- Worked on DDRA EDA
+- Maha & Dagmara continued training
+
+### Pablo
+
+Was off this week!
+
+# Ad-hoc topics to discuss
+
+# Scopes for Next Week
+
+### Joaquin
+
+- Ideas on A/B testing
+ - Check with the Guest team whether to continue
+ - Potentially make a temporary copy of the CIH PBI report that allows filtering by A/B test variant
+- Deal with Xero Journal for budget tracking
+- Finish DDRA EDA, discuss with the team next monday
+- Finish intermediate exercises with Lisa
+- https://guardhog.visualstudio.com/Data/_workitems/edit/30007 - Add exposure of A/B notebook
+
+### Uri
+
+- Chilling (for now)
+- Discuss EDA together with the team on monday
+
+### Pablo
+
+- DDRA EDA, discuss with the team next monday
+- Add Xero Journal to Airbyte sync
+- Stripe metadata fields added to invoicing Stripe export
+- Run invoicing after guest products
+- Look at Nila’s needs docs for Stripe exports
+- Help Domain Analysts continue with challenges
+- Troubleshoot failing Airbyte pipeline jobs to SQL Server
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-07-02 - Data Planning 2240446ff9c980fc8c7cd4915b55ec12.md b/notion_data_team_no_files/2025-07-02 - Data Planning 2240446ff9c980fc8c7cd4915b55ec12.md
new file mode 100644
index 0000000..b19e8c0
--- /dev/null
+++ b/notion_data_team_no_files/2025-07-02 - Data Planning 2240446ff9c980fc8c7cd4915b55ec12.md
@@ -0,0 +1,33 @@
+# 2025-07-02 - Data Planning
+
+### Off Topics
+
+- Joaquin off this week starting tomorrow; back on Tuesday
+- Changes on Program Creation Flow - Are we killing Deposit Management services?
+
+### Departure
+
+- Job description shared + helping prepare the selection process
+- Working on handover list
+
+### Data Driven Risk Assessment
+
+- 3 independent explorations completed
+- Currently gathering insights for the wider business team
+- Currently starting to train Machine Learning models
+
+### Revenue/Billable Bookings Analysis
+
+- New analysis from Joaquin to support the investigation
+- Working on a couple of extra analyses (service popularity and migration pricing validation)
+
+### Finance
+
+- Old Dash Invoicing exported with the changes on Guest Products for Stripe - all good
+- Additional exploration on Xero data for Budget tracking
+- We’ll adapt the Revenue Churn and Revenue Churn Rate targets in KPIs to align with the Financial targets - KPIs currently show a 3% target while it should be 1%
+
+### Domain Analysts
+
+- Progress is going according to plan: Maha, Daga & Lisa are doing the first open-ended challenges
+- Expecting to close the course somewhere in mid-late July
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-07-02 - Data Team Weekly 2240446ff9c980c68f88faf0087fad5e.md b/notion_data_team_no_files/2025-07-02 - Data Team Weekly 2240446ff9c980c68f88faf0087fad5e.md
new file mode 100644
index 0000000..e40f4ee
--- /dev/null
+++ b/notion_data_team_no_files/2025-07-02 - Data Team Weekly 2240446ff9c980c68f88faf0087fad5e.md
@@ -0,0 +1,58 @@
+# 2025-07-02 - Data Team Weekly
+
+# Last Week Summary
+
+### Joaquin
+
+- Ideas on A/B testing
+ - Check with the Guest team whether to continue
+ - Potentially make a temporary copy of the CIH PBI report that allows filtering by A/B test variant
+- Deal with Xero Journal for budget tracking
+- Finish DDRA EDA, discuss with the team next monday
+- Finish intermediate exercises with Lisa
+- https://guardhog.visualstudio.com/Data/_workitems/edit/30007 - Add exposure of A/B notebook
+
+### Uri
+
+- Chilling (for now)
+- Discuss EDA together with the team on monday
+- Start gathering insights on EDA for DDRA
+- Billable Bookings / Revenue / etc investigation
+- Fix alerts
+
+### Pablo
+
+- DDRA EDA, discuss with the team next monday
+- Add Xero Journal to Airbyte sync
+- Stripe metadata fields added to invoicing Stripe export
+- Run invoicing after guest products
+- Look at Nila’s needs docs for Stripe exports
+- Help Domain Analysts continue with challenges
+- Troubleshoot failing Airbyte pipeline jobs to SQL Server
+- Data alerts
+
+# Ad-hoc topics to discuss
+
+- Migration Old Dash to New Dash → Not stopped but slowed down
+
+# Scopes for Next Week
+
+### Joaquin
+
+- Go to Hamburg, come back alive
+- DDRA model building
+- Continue Revenue Analysis
+ - Revenue Analysis - Old Dash to New Dash revenue generated per booking at client level: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories?workitem=31720
+ - Revenue Analysis - What services/combination of services and prices are more popular per type of client in New Dash?: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories?workitem=31719
+
+### Uri
+
+- DDRA Consolidate Insights
+- Chloe on Resolutions Guesty nasty clients
+- Adapt Revenue Churn targets
+- Handover
+
+### Pablo
+
+- Handover
+- Hiring help
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-07-09 - Data Team Weekly 22b0446ff9c98090baa0fdb0e60ca7bd.md b/notion_data_team_no_files/2025-07-09 - Data Team Weekly 22b0446ff9c98090baa0fdb0e60ca7bd.md
new file mode 100644
index 0000000..4b7a3fb
--- /dev/null
+++ b/notion_data_team_no_files/2025-07-09 - Data Team Weekly 22b0446ff9c98090baa0fdb0e60ca7bd.md
@@ -0,0 +1,58 @@
+# 2025-07-09 - Data Team Weekly
+
+# Last Week Summary
+
+### Joaquin
+
+- Go to Hamburg, come back alive
+- DDRA model building
+- Continue Revenue Analysis
+ - Revenue Analysis - Old Dash to New Dash revenue generated per booking at client level: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories?workitem=31720
+ - Revenue Analysis - What services/combination of services and prices are more popular per type of client in New Dash?: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories?workitem=31719
+
+### Uri
+
+- DDRA Consolidate Insights
+- Adapt Revenue Churn targets
+- Handover wip
+
+### Pablo
+
+- Hiring help
+- Handover
+ - Document API integration
+ - Cleaning all models
+ - Credentials investigation
+
+# Ad-hoc topics to discuss
+
+- Xero contacts: extract manually, either from Nathan with a CSV or via a Python script
+- Do a more thorough analysis on Waiver payments
+
+# Scopes for Next Week
+
+### Joaquin
+
+- DDRA model
+- Finalize the Revenue analysis (include Waiver analysis)
+- Xero contacts (talk with Nathan)
+- A/B test finishing with Pedro and Clay
+ - Update A/B test notebook metrics to include Confident Stay
+- Finish the Domain Analyst studies with Lisa
+- Meet up with Nila for budget report data
+
+### Uri
+
+- Data engineer job position open
+- Finish handover with Pablo
+- Check with Chloe on the annoying clients
+- Review the Revenue analysis with Joaquin
+
+### Pablo
+
+- Interview case for data engineer.
+- Finish Xero API for Airbyte to use Uri’s account.
+- Follow up with Robinson on Airbyte issues.
+- Check tests performance.
+- Simplify use of CI.
+- Review suspicious data alert.
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025-Q3 Data Scope Priorities 2100446ff9c98097af85f2ecb53a0cdd.md b/notion_data_team_no_files/2025-Q3 Data Scope Priorities 2100446ff9c98097af85f2ecb53a0cdd.md
new file mode 100644
index 0000000..5fc77e6
--- /dev/null
+++ b/notion_data_team_no_files/2025-Q3 Data Scope Priorities 2100446ff9c98097af85f2ecb53a0cdd.md
@@ -0,0 +1,14 @@
+# 2025-Q3 Data Scope Priorities
+
+### Scopes:
+
+- DDRA: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Epics?workitem=30805
+- Domain Analysts: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Epics?workitem=31088
+- Lightdash: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Epics?workitem=31090
+
+### Business as Usual:
+
+- Data Captain: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Epics?workitem=31089
+- Unplanned: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Epics?workitem=31086
+
+For ad-hoc but big scopes, create a dedicated epic. Example as of Q2: Onboarding
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250122-01 - Power BI Main Guest KPIs Bug 1840446ff9c980249355f34c58c4686e.md b/notion_data_team_no_files/20250122-01 - Power BI Main Guest KPIs Bug 1840446ff9c980249355f34c58c4686e.md
new file mode 100644
index 0000000..f32bdf1
--- /dev/null
+++ b/notion_data_team_no_files/20250122-01 - Power BI Main Guest KPIs Bug 1840446ff9c980249355f34c58c4686e.md
@@ -0,0 +1,50 @@
+# 20250122-01 - Power BI Main Guest KPIs Bug
+
+This page reports a bug in the Main Guest KPIs report in Power BI, noted on 2025-01-21 at 16:00 (ES time). The bug was fixed on 2025-01-23.
+
+## Summary
+
+- Some formulas used in Power BI during the creation of the report, which were working correctly at the time of release, stopped working properly. Although we are not sure exactly when this happened, it couldn’t have been many days earlier, since Joaquin had checked the report the week before and we know that Joan usually checks it regularly.
+- The problem with the formulas only happens when the report is on the server; when working locally on the report, everything was working perfectly fine, as it did previously.
+- Once I (Joaquin) identified which formulas were generating the problems, I found a workaround and replaced the visualizations that were affected.
+
+## Impact
+
+Uri reported this issue after a meeting with Kayla where he was showing an example of our Main KPIs reports so we could extend them to the Accounting Managing team.
+
+Fortunately, the impact was not on the entire report but only on some particular visuals focused on the CSAT Score, which is also available in a separate report in case any user had an urgent need for it.
+
+## Timeline
+
+| Time | Event |
+| --- | --- |
+| 2025-01-22 16:00 | Uri reported this bug after a meeting with Kayla in which he was displaying the report. |
+| 2025-01-22 16:30 | Uri notified me (Joaquin) about the bug so I could take a look at what the problem could be. |
+| 2025-01-22 09:00 | Together with Uri, we tried to understand the problem once we noted that the report was working perfectly fine locally, and did some troubleshooting on the PBI server to find the root cause. Unfortunately, the details PBI gives on any issue are not very clear, so this effort wasn’t really fruitful. |
+| 2025-01-22 11:00 | After multiple failed attempts at fixing the bug by re-uploading the report, I started going into detail to find the problem. I realized that a formula was causing it (WEEKNUM, which gives the number of the week in the year). |
+| 2025-01-23 09:00 | I changed the measure using WEEKNUM and re-uploaded the report, which is now working correctly. |
+
+## Root Cause(s)
+
+We don’t know for sure, since there weren’t any changes from our side that could have affected the report. We believe it was mainly a problem with the PBI server or a buggy update on their side that generated this issue.
+
+## Resolution and recovery
+
+- We noticed the problem before any user did, so we had enough time to fix it before anybody was affected by it.
+- The good thing is that only a small part of the report was not working, and that data is available in another report in case of emergency.
+- Once we found where the problem was, we managed to fix it relatively quickly with a workaround, so the report is now working perfectly.
+
+## **Lessons Learned**
+
+- What went well
+ - Good communication within the team to pick up the problem
+- What went badly
+ - It was hard to really understand what the root problem was; troubleshooting in PBI is not user friendly
+
+General lesson:
+
+Whenever possible, try to do all calculations within DWH models, so that the PBI report has no created measures, or as few as possible. This will help mitigate problems like this one.
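+
+To illustrate the lesson (a minimal sketch assuming a pandas-based transformation step and made-up column names, not the actual DWH model), the week number can be materialised as a column upstream so the report reads a plain field instead of relying on a WEEKNUM measure:
+
+```python
+import pandas as pd
+
+# Hypothetical bookings extract; column names are illustrative only.
+bookings = pd.DataFrame({"created_at": pd.to_datetime(["2025-01-20", "2025-01-22"])})
+
+# Materialise the ISO week number upstream so the PBI report only reads a column.
+bookings["created_week_number"] = bookings["created_at"].dt.isocalendar().week
+print(bookings)
+```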
+
+## Action Items
+
+## Appendix
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250124-01 - Booking invoicing incident 1880446ff9c9803fb830f8de24d97ebb.md b/notion_data_team_no_files/20250124-01 - Booking invoicing incident 1880446ff9c9803fb830f8de24d97ebb.md
new file mode 100644
index 0000000..0bb0a38
--- /dev/null
+++ b/notion_data_team_no_files/20250124-01 - Booking invoicing incident 1880446ff9c9803fb830f8de24d97ebb.md
@@ -0,0 +1,146 @@
+# 20250124-01 - Booking invoicing incident
+
+
+Managed by: Uri
+
+## Summary
+
+- Components involved: `sh-invoicing-exporter` tool
+- Started at: 2024-11-29 14:33 UTC
+- Detected at: 2025-01-24 10:06 UTC
+- Mitigated at: 2025-01-28 16:06 UTC
+- Resolved at: 2025-02-05 14:52 UTC (after post-mortem)
+
+While working on the analysis of the Booking Fees per Billable Booking decrease, Uri noticed that two of the main clients in terms of Booking Fees were not invoiced for November and December 2024. After confirmation from the Finance side and a deep-dive, we noticed that something was odd with the exports generated by the sh-invoicing-exporter tool.
+
+The reason is a faulty modification of the sh-invoicing-exporter tool after the latest invoicing incident on November 4th, 2024. While the change properly handles the agreed logic, it inadvertently introduces an error when invoicing clients that have billing set to Verification Start Date but do not have Verification Requests associated with their bookings. This issue did not happen prior to November’s modification.
+
+**A total of £20,777.83 was not invoiced** in the combined months of November and December 2024. The remediation carried out on January 28th 2025 accounted for a **late invoicing of 99.1%** of the missed revenue, pending payment.
+
+## Impact
+
+- On the invoicing process:
+ - The November and December invoicing processes were carried out in apparent normality, while some clients were unintentionally left un-invoiced.
+ - A first estimate puts the missing revenue around 10k to 15k GBP per month missed in terms of Booking Fees. The estimated impact over 2 months would be around 20k to 30k GBP.
+ - **The final figure after the invoicing process has been re-run on January 28th 2025 is:**
+ - **Combined over November and December - Revenue £20,777.83**
+ - **Total Invoiced (late) - Revenue £20,592.76**
+ - **Total lost revenue - £185.07**
+
+ Lost Revenue comes from clients that churned in the period (November & December), from which we do not anticipate receiving the funds and which therefore have not been invoiced.
+
+
+## Timeline
+
+All times are UTC, in 00-24h format.
+
+| Time | Event |
+| --- | --- |
+| 2024-11-29 14:33 | A new fix to handle [November invoicing incident](20241104-01%20-%20Booking%20invoicing%20incident%20due%20to%20bu%2082f0fde01b83440e8b2d2bd6839d7c77.md) is merged into sh-invoicing-tool. The relevant commit is [6ea412a9320c621560dd780a7cf49617ffdefa8e](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/6ea412a9320c621560dd780a7cf49617ffdefa8e?refName=refs%2Fheads%2Fmain). |
+| 2024-12-02 (approx.) | Pablo runs sh-invoicing-exporter as usual, and provides the Exports to the Finance team. These do not contain the information to bill some clients, but this goes unnoticed. This is the first time the process runs with the latest modifications described [here](https://www.notion.so/Fixing-the-invoicing-incident-13d0446ff9c98056a65bc3676a34873c?pvs=21), after the November 2024 invoicing incident. |
+| 2025-01-02 (approx.) | Uri runs sh-invoicing-exporter as usual, and provides the Exports to the Finance team. These do not contain the information to bill some clients, but this goes unnoticed. |
+| 2025-01-21 13:49 | Matt reports to Data team that the Booking Fees revenue is low in December, to be investigated. |
+| 2025-01-22 16:00 (approx.) | Uri starts analysing the decrease of Booking Fees per Billable Booking, as requested by Matt. |
+| Friday 2025-01-24 10:06 | Uri raises that two clients, 20529225110 - Lavanda (UK) and 6030475449 - Hospitable Inc (US), have not been invoiced in either November 2024 or December 2024. Asks Finance for confirmation from their POV. |
+| 2025-01-24 12:03 | After a sync between Nathan and Uri, it’s confirmed that Lavanda was not invoiced. The issue seems to be related to the invoicing exports delivered by Data team to Finance team, on sh-invoicing-tool. On trying to re-create the exports for Lavanda, we got empty files (no billable bookings and no booking fees) for December, November and October. The fact that Lavanda was actually invoiced correctly in October but Uri couldn’t replicate the invoice export for October points to a potential issue either on sh-invoicing-tool or the backend data. |
+| 2025-01-24 12:19 | Uri confirms we have the same problem with Hospitable. However, Uri is able to properly create the invoicing export for Home Team Vacation Rentals LLC, a client that was actually invoiced. At this stage it seems the issue might come from changes introduced in sh-invoicing-tool, and the hypothesis of backend data issues loses support.
+Ben R. is added in the loop for visibility on a potential invoicing incident, while Uri further investigates on his own. |
+| 2025-01-24 13:16 | Uri confirms this is an incident after checking the sh-invoicing-exporter tool, localising the [faulty commit](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/6ea412a9320c621560dd780a7cf49617ffdefa8e?refName=refs%2Fheads%2Fmain) and reproducing the no-results for Lavanda manually. It’s also confirmed that the issue is on sh-invoicing-tool. An incident channel is created. |
+| 2025-01-24 13:37 | Uri provides the explanation of the root cause of the incident in the channel, specifically:
+
+- The issue lies in [data-invoicing-exporter](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter). This is the tool that we use from Data side to generate the Excel files to support old dash invoicing.
+- This is a consequence, probably undesired, from the changes carried out after the latest incident on November 4th. I'm referring to the incident described [in this Notion](20241104-01%20-%20Booking%20invoicing%20incident%20due%20to%20bu%2082f0fde01b83440e8b2d2bd6839d7c77.md), the remediation explained [in this other Notion](https://www.notion.so/Fixing-the-invoicing-incident-13d0446ff9c98056a65bc3676a34873c?pvs=21) and that was discussed in this channel #invoicing-firefighting
+- The impact from my current understanding is that any Host that would have billing set to `VerificationStartDate` but does NOT have Guest Journeys associated to the bookings, will not have been billed from November onwards. This is exactly the case for Lavanda, for instance. This can happen again in the next invoicing cycle if not fixed.
+
+In detail:
+- This is the [faulty commit](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/6ea412a9320c621560dd780a7cf49617ffdefa8e?refName=refs%2Fheads%2Fmain) . Specifically, the change breaks the logic on `VerificationRequestId IS NULL`, which exactly handles the Lavanda situation of having Bookings but not Guest Journeys (note that this was covered before, while not after the merge). This explains why I was unable to replicate the extraction of October, even though it worked back in the days, because I was using today the latest version of the tool.
+- In essence, the new logic assumes it can compare the `LinkUsed` from `VerificationRequest` table to `JoinDate` from `SuperhogUser` table. For Lavanda case, the issue is that `LinkUsed` will always be null and thus the condition is always false, excluding ALL bookings. |
+| 2025-01-24 13:45 | Uri proposes a first solution to handle the billing method set to `VerificationStartDate`:
+- If the Booking has a Guest Journey (Verification Request) associated with it, we keep the latest logic. This means we keep using the logic implemented after November 2024 that is currently in place, properly handling new/returning guests.
+- If the Booking does NOT have a Guest Journey (Verification Request) associated with it, we keep the previous logic. This means we will charge the booking if the Guest `JoinDate` happened within the month.
+
+This effectively recreates around 1,500 bookings for Lavanda in October, while we had 1,489 actually billed in October. |
+| 2025-01-24 14:45 | Missing Booking Fees impact is estimated to be mostly coming from Lavanda and Hospitable (~88% in total). Some smaller accounts exist, part of those have already churned. This makes an estimate figure of 10k-15k GBP missed per month, that could be 20k-30k GBP in the 2 months. |
+| 2025-01-24 14:47 | Nathan suggests to re-do the exports for the missing clients identified in the previous estimation to get the actual figure of impact. Pending validation of logic from Ben R. before moving forward. |
+| 2025-01-24 15:22 | Uri creates a code PR in sh-invoicing-tool for Ben R. to review, with the suggested changes. This is the PR: [Ensure Billing of Bookings without Verification Requests](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/pullrequest/4185) |
+| 2025-01-24 15:49 | Ben and Uri discuss the business logic of the change, lacking understanding of why verification request would be null for those bookings. While this is actually the case, there’s a possibility that it’s linked to KYG bookings. In short, while the suggested change could properly invoice the missing accounts, it’s possible that this can ultimately generate wrong invoices for other clients that are not supposed to be invoiced. Since it’s quite confusing at the moment, Ben and Uri agree to take a look at this on Monday. A message is sent to the invoicing channel to clarify that we won’t have the exports by Friday afternoon since we need to deep-dive into the code. |
+| 2025-01-24 15:50 | Nathan acknowledges the proposed plan and comments that he’d chat with Leo to notify that we’d be preparing the missing invoices for the 2 main clients, Lavanda and Hospitable, subject to tech work.
+End of the incident management for Friday 24th January 2025. |
+| Monday 2025-01-27 11:52 | Ben and Uri sync. It is agreed that the initially proposed fix is ok to handle the cases of Bookings without Verification Requests, likely because these are coming from APIs. A follow up message is sent to proceed with the implementation and re-create November and December exports.
+In parallel, Ben and Uri discuss on New Dash exclusion for the old invoicing tool (not related to the incident). Need clarification from Finance side to proceed on New Dash regard. |
+| 2025-01-27 13:47 | Uri merges [the fix](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/2e464aa9bc420540c9536ed500a0af15b9b1e3de?refName=refs%2Fheads%2Fmain) and updates the sh-invoicing-tool in Airbyte VM to version 3.1.1. A run for November 2024 with the 2 main affected accounts (Lavanda and Hospitable) is successful in the sense that now gathers Booking Fees. |
+| 2025-01-27 14:17 | Uri triggers the full run for November 2024 on version 3.1.1. |
+| 2025-01-27 17:05 | Full run for November 2024 is finished. After checking the files, it’s clear that there has been an error somewhere as we’re observing a 10X difference vs. what was observed for January export. |
+| 2025-01-27 18:41 | Following the case of Home Team Vacation Rentals, it is observed that the issue comes from the fact that we’re missing some parentheses in the fixed code. A rookie mistake, but at this stage it does not seem to be something that would challenge the proposed business logic for the fix. |
+| 2025-01-27 18:55 | Uri merges [the fix](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/612d991190bca93dd662ace790c866d5d10307dc?refName=refs%2Fheads%2Fmain) and updates the sh-invoicing-tool in Airbyte VM to version 3.1.2. First tests on Lavanda, Hospitable and Home Team Vacation Rentals are successful. |
+| 2025-01-28 07:55 | Uri triggers the full run for November 2024 on version 3.1.2. |
+| 2025-01-28 09:47 | Full run for November 2024 is finished. At first glance, it looks ok. File is shared with Nathan |
+| 2025-01-28 10:06 | Uri triggers the full run for December 2024 on version 3.1.2. |
+| 2025-01-28 11:14 | Nathan confirms November files look good and that Finance will start raising them |
+| 2025-01-28 12:54 | Full run for December 2024 is finished. At first glance, it looks ok. File is shared with Nathan |
+| 2025-01-28 15:28 | Nathan finishes the missing invoices. Total invoiced revenue (late) accounts for £20,592.76 out of an expected £20,777.83. The difference, £185.07, is lost revenue due to clients that churned in the period (Nov & Dec 2024), from which we have no anticipation of receiving the funds and which therefore have not been invoiced. Payment expected within the next week or two. |
+| 2025-01-28 16:06 | Incident is mitigated. |
+
+## Root Cause(s)
+
+After the last invoicing incident, available here:
+
+[20241104-01 - Booking invoicing incident due to bulk UpdatedDate change](20241104-01%20-%20Booking%20invoicing%20incident%20due%20to%20bu%2082f0fde01b83440e8b2d2bd6839d7c77.md)
+
+there has been agreement on how to avoid dependency on the UpdatedDate field for price plans that are supposed to be charged on Verification Start, which has proven not to be reliable for invoicing purposes.
+
+The agreement on the new logic for invoicing is described here:
+
+[Decisions](https://www.notion.so/Decisions-13d0446ff9c980248babfd2b83b7781c?pvs=21)
+
+The changes were implemented in sh-invoicing-exporter as part of the [new version 3.1.0](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/6ea412a9320c621560dd780a7cf49617ffdefa8e?refName=refs%2Fheads%2Fmain), and these are in line with the decisions taken for the November incident. However, this new logic does not contemplate the billing of Bookings that do not have a Verification Request linked to them, while this case was handled prior to version 3.1.0. This led to Bookings without Verification Requests not being billed for the months of November 2024 and December 2024.
+
+## Resolution and recovery
+
+The resolution considers a mix of the logic prior and after November 2024 incident. Namely:
+
+- For Old Dashboard clients who have active price plans in the period that are charged according to the Verification Start:
+ - If Booking has a Verification Request linked to it:
+ - If a Guest is New, then:
+ - Charge if Guest Join Date is within the invoicing period
+ - If a Guest is Returning, then:
+ - Charge if Link Used Date is within the invoicing period
+ - If Booking does not have a Verification Request linked to it:
+ - Charge if Guest Join Date is within the invoicing period
+
+Coding-wise, this just means handling the case where Link Used Date is null and, if so, applying the same condition as for a New Guest.
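+
+As an illustration only (a minimal Python sketch with simplified, hypothetical field names, not the actual sh-invoicing-exporter code), the combined rule could be expressed as:
+
+```python
+from datetime import date
+from typing import Optional
+
+
+def should_charge(
+    period_start: date,
+    period_end: date,
+    guest_join_date: date,
+    guest_is_returning: bool,
+    link_used_date: Optional[date],  # None when no Verification Request is linked
+) -> bool:
+    """Decide whether a Verification-Start-billed booking belongs to the period."""
+    if guest_is_returning and link_used_date is not None:
+        # Returning guest with a Verification Request: bill on Link Used Date.
+        return period_start <= link_used_date <= period_end
+    # New guest, or no Verification Request / null Link Used Date: bill on Join Date.
+    return period_start <= guest_join_date <= period_end
+```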
+
+This change was first implemented on 27th January 2025 under [version 3.1.1](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/2e464aa9bc420540c9536ed500a0af15b9b1e3de?refName=refs%2Fheads%2Fmain), followed by a fix after an unintended bug. The last working version corresponds to [version 3.1.2](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter/commit/612d991190bca93dd662ace790c866d5d10307dc?refName=refs%2Fheads%2Fmain), also from 27th January 2025.
+
+The invoicing reports for November 2024 and December 2024 were recreated under the fixed version 3.1.2 during the morning of 28th January 2025 and shared with Finance to act upon.
+
+During the afternoon of 28th January 2025, Finance (Nathan) posted the remaining invoices to account for a total of £20,592.76 of late revenue.
+
+## **Lessons Learned**
+
+*List of knowledge acquired. Typically structured as: What went well, what went badly, where did we get lucky*
+
+- What went well
+ - Relatively consistent figures on Power BI & DWH KPIs vs. Finance’s P&L, which allowed for a fast deep-dive
+ - Fast and effective cross-department collaboration once the issue was initially flagged
+ - Well documented incident management on November 2024 allowed for a quick and easy understanding on what was going on during January 2025 incident
+- What went badly
+ - Being unable to identify an issue in the critical area of invoicing for more than 2 months
+ - Being unable to notice for 2 months that two of the biggest clients were not invoiced
+- Where did we get lucky
+ - Prioritising the analysis of the Booking Fees per Billable Booking decrease after it became one of the newly defined financial levers, which led to discovering the invoicing issue.
+ - The incident was reported a week before the end of January, thus allowing for recovery time before the impact extended to yet a third invoicing cycle
+
+## Action Items
+
+- [x] Xero automation in Power BI - Global MoM revenue aggregations evolution
+- [x] Xero automation in Power BI - Top X clients MoM revenue aggregations evolution
+- [ ] Further automation improvements - Xero (for source of truth in actual invoiced amount) + Hubspot (for churn/onboarding/AM info) + Backend (for what we should have invoiced)
+- [ ] New Dash “Chargeable” services - To check with Ben and Nathan to ensure logic is consistent
+
+## Appendix
+
+*Miscellanea corner for anything else you might want to include*
+
+-
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250214 Retro 19a0446ff9c980da9bfdfffa4b982bad.md b/notion_data_team_no_files/20250214 Retro 19a0446ff9c980da9bfdfffa4b982bad.md
new file mode 100644
index 0000000..9ca5ecd
--- /dev/null
+++ b/notion_data_team_no_files/20250214 Retro 19a0446ff9c980da9bfdfffa4b982bad.md
@@ -0,0 +1,61 @@
+# 20250214 Retro
+
+## 🙌 What went well
+
+- Pablo had a child ♥️
+ - Managed to survive and still get a lot of work done without Pablo
+ - Good organization and staying in touch with each other
+- Office & Fun
+ - One of the best comilonas so far (upcoming vegetarian next week)
+ - New office is cool and much more convenient
+ - We have Cobee now!
+- Delivery
+ - Solved all invoicing problems and urgent fires
+ - TMT is happy with us so we can feel confident we are doing well
+ - Updated all Legacy reports still in use so we won’t discontinue them
+ - Nice analysis
+ - New metrics and visualizations in Main KPIs looking great
+ - A/B test had good results
+- New Dash is growing more and more
+- Company is focusing on setting a strategy and key levers
+
+## 🌱 What needs improvement
+
+- Pablo is not here
+ - Massive context switching and pressure handling
+ - Lack of Engineering capability to do engineering work + challenge architectural decisions
+ - Airbyte CPU close to 100% for several weeks went unnoticed (though nothing broke)
+ - Other teams asking for things we can’t handle (or that will take a lot of time) because we don’t have an engineer, despite all of this being discussed prior to Pablo’s departure
+- RiF + additional people leaving:
+ - Unclear restructure, responsibilities, points of contact. Especially on the Product team
+ - Unclear transition of Data team to RevOps
+ - There is a lot of insecurity as to how this will all keep on working with so many people being released or quitting
+- Delivery
+ - Old dash invoicing issue
+ - People still asking for some adhoc request not through data captain
+ - Late dbt run production failures
+ - A/B test miscommunication / lack of certainty
+- Priorities in some teams don’t seem to be very aligned yet with the recently presented north star
+
+## 💡 Ideas for what to do differently
+
+-
+
+## ✔ Action items
+
+- [x] Retro with Ben C. around planning practices (centralization is not working, changing scopes too fast, etc).
+ - [x] Quiet Tuesdays and Quiet Thursdays
+ - [x] Move calendar recurring meetings
+ - [x] Give Ben C. a heads-up
+- [ ] Read Shape-up ([https://basecamp.com/shapeup/](https://basecamp.com/shapeup/)) and discuss next retro
+- [ ] Fuse Comilona and Retro and schedule for Monday 13/01 and make retros loooonger (2H)
+- [ ] Sketch roughly formalization of Domain Analysts programme
+- [ ] Discuss and agree with Tech team on data-alerts onboarding (should they be there? who should we tag?)
+- [x] Think about how to make some kind of “PBI Homepage” where Superhog personnel can find all the PBIs that are available easily
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] Document existing invoicing processes, not just new ones
+- [ ] Azure DevOps checks on DWH complete PR button to ensure branch is up-to-date with master branch
+- [x] Discuss with Ben C. New Dash retrospective with PMs/Dash Squad/Data by the EOY
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250304-01 - Verification Bulk Update 1ad0446ff9c9806faa8bf7673e7ed6a5.md b/notion_data_team_no_files/20250304-01 - Verification Bulk Update 1ad0446ff9c9806faa8bf7673e7ed6a5.md
new file mode 100644
index 0000000..05320d8
--- /dev/null
+++ b/notion_data_team_no_files/20250304-01 - Verification Bulk Update 1ad0446ff9c9806faa8bf7673e7ed6a5.md
@@ -0,0 +1,81 @@
+# 20250304-01 - Verification Bulk Update
+
+Managed by Uri (Data side)
+
+## Summary
+
+- While trying to fix a support ticket in the backend on the 3rd March 2025, Tech team accidentally bulk-updated the Value field of the Verification table, appending “Payment Validation: Waiver” to the existing values.
+- This has had impact on the Power BI reports that depend directly or indirectly on this field, namely: Bookings report in SH Reporting and Guest Satisfaction on Guest Insights.
+- After a restore carried out by Tech team and a refresh carried out by Data team, the issue has been fully fixed on the morning of 5th March 2025.
+
+## Impact
+
+**On Data side:**
+
+**Superhog Reporting (production)**
+
+- **Report**: Bookings
+- **Tabs**: All
+- **Impact**: Filters of Chosen Fee/Waiver/CheckInCover/NoCover do not show the correct information.
+
+**Guest Insights**
+
+- **Report**: Guest Satisfaction
+- **Tab**: Only Guest Responses
+- **Impact**: The value shown on Selected Payment Option is wrong
+
+The rest of Power BI Reports have been working normally.
+
+**On Finance side:**
+
+Delay of ~1 day on the invoicing process that depends on SH reporting - Bookings.
+
+## Timeline
+
+| Time (UTC) | Event |
+| --- | --- |
+| 2025-03-03 13:07 | At 13:07 on 3rd March 2025, a bulk update on the Value field of the Verification table is done in the Backend. This effectively concatenates any existing verification value with Payment Validation: Waiver. This was unintentional; the actual goal was to update a single record manually. |
+| 2025-03-04 06:00 | The DWH scheduled run happens. This captures the changes, as Updated Date was also modified, so reports depending on this Verification value are affected. No data alert is raised in this regard, so the issue goes unnoticed. |
+| 2025-03-04 12:40 | Gus and Lawrence communicate to Uri about the issue. Also, Gus posts a message in #all-staff. Uri checks impacts on Data side. |
+| 2025-03-04 13:36 | Uri posts a message in #data about the impacts. Mostly, SH reporting production on Booking tabs have issues on the Chosen Fee/Waiver/CheckInCover/NoCover. Additionally, Guest Satisfaction shows wrong values on the Selected Payment Option. |
+| 2025-03-04 17:50 | Lawrence confirms to Uri that the backend data has been restored normally. |
+| 2025-03-05 7:05 | Uri manually triggers the refresh of the Verification table in Airbyte. |
+| 2025-03-05 7:34 | Uri manually triggers the full-refresh on DBT for the model stg_core__verification in production. |
+| 2025-03-05 7:37 | Uri manually triggers the usual DBT run of all models. |
+| 2025-03-05 7:55 | DWH and Power BI updated. Communication sent in #data to notify users that the incident is now resolved. |
+
+## Root Cause(s)
+
+On the 3rd March 2025, Tech team was asked to back fill a payment that had been recorded in Stripe but not recorded in the Backend database.
+
+This involved:
+
+- Inserting one record in the Payment table
+- Inserting one record in the VerificationToPayment table
+- Updating the Value field in one record in the Verification table to append the description of the payment
+
+Unfortunately, the WHERE clause of the update was incorrect and reviews missed it, so the update accidentally appended the payment description to every record in the Verification table, along with updating the updated date.
+
+The value appended was ‘Payment Validation: Waiver’. No other critical data was affected.
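+
+As a general safeguard to take away from this (a minimal sketch assuming a Python DB-API connection such as pyodbc and a hypothetical `Id` column, not the Tech team’s actual script), a single-record backfill can verify the affected row count before committing:
+
+```python
+def append_payment_description(conn, verification_id, description):
+    """Append a payment description to exactly one Verification row, or roll back."""
+    cursor = conn.cursor()
+    cursor.execute(
+        # Hypothetical column names; '?' placeholders as in pyodbc-style drivers.
+        "UPDATE Verification "
+        "SET Value = CONCAT(Value, ?), UpdatedDate = CURRENT_TIMESTAMP "
+        "WHERE Id = ?",
+        (description, verification_id),
+    )
+    if cursor.rowcount != 1:  # the WHERE clause matched more (or fewer) rows than intended
+        conn.rollback()
+        raise RuntimeError(f"Expected 1 row, matched {cursor.rowcount}; rolled back.")
+    conn.commit()
+```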
+
+## Resolution and recovery
+
+- From Tech side:
+ - Updating the Value and UpdatedDate in the live data with the values from the backup data for any records where the updated date matches the date of the change.
+ - Manually checking the value of any records updated since the change to assess what changes need to be made, if any.
+- From Data side:
+ - Manual refresh of the Verification table once Tech resolution has been in place and general re-run.
+
+## **Lessons Learned**
+
+- What went well
+ - Quick communication from Gus and Lawrence, knowing that this issue could have impacted Data Reporting.
+ - In-detail communication from Lawrence that helped identify the impacts and the course of action from the Data side quickly.
+- What went badly
+ - No data alert was raised - Data team was unaware of the issue until Tech team communicated it.
+- Where we were lucky
+ - Considering the potential risk of a bulk update, an update on the Value of the Verification table does not have a massive impact on the Data side, since we rely mostly on other fields. Thus the impact has been limited, but not zero.
+
+## Action Items
+
+## Appendix
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250319 Retro 1bb0446ff9c980f09345f58c8517c945.md b/notion_data_team_no_files/20250319 Retro 1bb0446ff9c980f09345f58c8517c945.md
new file mode 100644
index 0000000..21cd21f
--- /dev/null
+++ b/notion_data_team_no_files/20250319 Retro 1bb0446ff9c980f09345f58c8517c945.md
@@ -0,0 +1,57 @@
+# 20250319 Retro
+
+## 🙌 What went well
+
+- **We can live without Pablo**
+ - Welcoming back Pablo with nothing burning
+ - You guys have chewed through my leave wonderfully. Everyone happy and nothing has exploded. Good job!
+- **Company situation**
+ - Clearer company strategy
+ - It seems the dust has settled after the December/Jan company changes.
+ - Data team has clearly been positioned as an important one in Truvi and we're on the table to decide where the company is going
+- **Deliveries**
+ - Good advance with new reports (Churn Report, Resolutions)
+ - New Main KPIs tab with targets and performance MTD & YTD
+ - Excel training session
+ - Data-driven alert spotting and communication (HTVR)
+- **Work with other teams**
+ - Good organization between us and Matt to decide work priorities
+ - Good collabs with Alex & Nathan
+- So far good adaptation to Truvi rebranding, though there is still work to be done.
+
+## 🌱 What needs improvement
+
+- **Company org and ways of working**
+ - Lack of communication about what the hierarchy is after the recent changes
+ - Many open frontlines / context switching (though it has reduced in the latest weeks)
+ - Company organisation after the RIF is still not clear, especially on the Product side
+ - Many fires still in Tech side
+ - I don't see initiatives on the table that I feel will move the needle
+- Though the Excel and Power BI training sessions help, we need to find ways to make them more engaging for the users
+- We need more engineering skills within the team (even though nothing was really badly broken in Pablo’s absence)
+
+## 💡 Ideas for what to do differently
+
+- Folder and naming convention in DWH → we have CosmosDB + Core cases that end up in a Athena/E-Deposit/CIH/Resolution folder
+- Copilot paid version
+
+## ✔ Action items
+
+- [ ] Get Copilot paid version
+- [ ] Think about changing dbt model naming/structure conventions
+ - Should `int/cross` be split into… something?
+- [ ] Request TMT clarity on org. structure again
+- [ ] Read Shape-up ([https://basecamp.com/shapeup/](https://basecamp.com/shapeup/)) and discuss next retro
+- [ ] Fuse Comilona and Retro and schedule for Monday 13/01 and make retros loooonger (2H)
+- [ ] Sketch roughly formalization of Domain Analysts programme
+- [ ] Discuss and agree with Tech team on data-alerts onboarding (should they be there? who should we tag?)
+- [x] Think about how to make some kind of “PBI Homepage” where Superhog personnel can find all the PBIs that are available easily
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Potentially, also include CI checks in dbt repo
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] Document existing invoicing processes, not just new ones
+- [ ] Azure DevOps checks on DWH complete PR button to ensure branch is up-to-date with master branch
+- [x] Discuss with Ben C. New Dash retrospective with PMs/Dash Squad/Data by the EOY
+
+[https://www.theodinproject.com/lessons/foundations-axes](https://www.theodinproject.com/lessons/foundations-axes)
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250409-01 - Wrong computation on Revenue Retaine 1d10446ff9c980e0b6d3e52b40879b68.md b/notion_data_team_no_files/20250409-01 - Wrong computation on Revenue Retaine 1d10446ff9c980e0b6d3e52b40879b68.md
new file mode 100644
index 0000000..5bb9cab
--- /dev/null
+++ b/notion_data_team_no_files/20250409-01 - Wrong computation on Revenue Retaine 1d10446ff9c980e0b6d3e52b40879b68.md
@@ -0,0 +1,84 @@
+# 20250409-01 - Wrong computation on Revenue Retained metrics
+
+Managed by Uri
+
+## Summary
+
+- From April 2 to 10, 2025, a data model error caused inaccurate Total and Retained Revenue metrics in key Power BI reports, with deviations up to ±3%.
+- The issue stemmed from improper handling of metric dimensions during a KPI refactor.
+- It was fixed on April 10 with added test coverage to prevent recurrence.
+
+## Impact
+
+From the 2nd of April to the 10th of April 2025, the computation of the Total and Retained Revenue metrics had Data Quality issues.
+
+An estimated impact over all historical data shows that:
+
+- Total Revenue was showing +0.7% more than the actual.
+- Revenue Retained was showing -2.5% less than the actual.
+- Revenue Retained Post Resolutions was showing -3.0% less than the actual.
+
+This issue was reflected in the following Power BI Apps:
+
+Business Overview
+
+- **Report**: Main KPIs
+- **Tabs**: All
+- **Impacted metrics**: Total Revenue, Revenue Retained, Revenue Retained Post Resolutions and equivalent rates.
+
+Account Management
+
+- **Report**: Churn Report & Account Margin
+- **Tabs**: All
+- **Impacted metrics**: Total Revenue, Revenue Retained, Revenue Retained Post Resolutions and equivalent rates. Limited to 34 affected Deals.
+
+The rest of Power BI Reports have been working normally.
+
+Worth mentioning that the extractions for the Old Dash to New Dash migration project have *not* been affected.
+
+## Timeline
+
+| Time (UTC) | Event |
+| --- | --- |
+| 2025-04-01 13:40 | A new daily model for Total and Retained Revenue is created, with the name `int_kpis__metric_daily_total_and_retained_revenue`. This contains an error in the computation. However, while this is actually deployed in production, it is not affecting any existing reporting. Related PR [4874](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/4874). |
+| 2025-04-02 14:00 | The switch of new Revenue models from KPIs happens in production. This introduces the faulty computation in production. Related PR [4887](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/4887). |
+| 2025-04-09 13:43 | Kayla raises to Uri the fact that Revenue Retained and Revenue Retained Post-Resolutions is wrongly computed for a specific account. Indeed, Total Revenue - Host TakeHome is not equal to Revenue Retained in Account Margin report. |
+| 2025-04-09 15:39 | Error is found in the model `int_kpis__metric_daily_total_and_retained_revenue`. The fact that guest payments have more dimensions than the rest of the models is generating duplicates in a very few cases, which is hidden by the aggregation of the metric. A total of 34 deals are affected, which are shared with Kayla. |
+| 2025-04-10 06:52 | A Pull Request is created to fix the issue in the source, `int_kpis__metric_daily_total_and_retained_revenue`. Additionally it creates new data tests to ensure the issue does not happen in AM Account Margin report. Related PR [4970](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/4970). |
+| 2025-04-10 07:23 | PR is merged and a re-run of DWH is triggered. |
+| 2025-04-10 07:40 | Incident is mitigated after the successful run. |
+| 2025-04-10 13:06 | Additional test coverage implemented for Total Revenue and Revenue Retained Post-Resolutions in critical, report-facing models. |
+
+## Root Cause(s)
+
+In the scope of the KPIs refactoring, a faulty computation was introduced in the new model `int_kpis__metric_daily_total_and_retained_revenue`. In short, the combination of dimensions was not being handled properly: Guest Revenue had additional dimensions with respect to Invoiced Revenue and Host Resolutions. This was causing very localised duplicated instances of Invoiced Revenue and Host Resolutions. Since the final metrics were aggregated by summing, these duplicates were hidden within an internal CTE of the model and no duplication alert was ever raised.
+
+This issue went unnoticed during development despite using audit tools for refactoring purposes. Most likely, the root cause is a human mistake in checking such audits, as later checks show that the issue was already there at the time this new model was put in production.
+
+## Resolution and recovery
+
+Resolution has focused mostly on conducting a first aggregation of Guest Revenue data for the necessary dimensions before joining with Invoiced Revenue and Host Resolutions metrics.
+
+Additionally, the same aggregation has been applied to Invoiced Revenue and Host Resolutions, despite not being strictly needed. This is just to ensure that, in the event of further dimensions being created for these models, the outcome of Total and Retained Revenue would still hold.
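+
+For illustration (a minimal pandas sketch with made-up column names, not the dbt model itself), the fan-out and the pre-aggregation remedy look roughly like this:
+
+```python
+import pandas as pd
+
+guest_revenue = pd.DataFrame(
+    {"deal_id": [1, 1], "payment_option": ["waiver", "deposit"], "guest_rev": [50.0, 30.0]}
+)
+invoiced_revenue = pd.DataFrame({"deal_id": [1], "invoiced_rev": [100.0]})
+
+# Faulty approach: the extra `payment_option` dimension duplicates invoiced_rev rows.
+faulty = guest_revenue.merge(invoiced_revenue, on="deal_id")
+print(faulty["invoiced_rev"].sum())  # 200.0 -- overstated by the hidden duplicate
+
+# Fix: aggregate Guest Revenue down to the shared dimensions before joining.
+guest_agg = guest_revenue.groupby("deal_id", as_index=False)["guest_rev"].sum()
+fixed = guest_agg.merge(invoiced_revenue, on="deal_id")
+print(fixed["invoiced_rev"].sum())  # 100.0 -- correct
+```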
+
+Testing coverage has been increased in downstream, report-facing models to ensure that such an issue is properly flagged.
+
+After 2025-04-10 07:40 UTC, all affected instances on Power BI have been recovered normally.
+
+## **Lessons Learned**
+
+- What went well
+ - Kayla effectively raising the issue on checking the Account Margin report
+ - Our development flow and version control allowed us to quickly understand when the issue started, what was the root cause, reproduce it and remediate it
+- What went badly
+ - Issue went unnoticed for a week, without any alerting in place.
+ - Development deficiencies in the refactor process, which failed to detect this issue.
+- Where we were lucky
+ -
+
+## Action Items
+
+- [ ] Retrospect all together on how we can perform refactors in the `dbt` project with more confidence and fewer mistakes
+
+## Appendix
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250414 Old Dash Invoicing - Exclude New Dash dat 1d50446ff9c9807aa1edcca0c9e97082.md b/notion_data_team_no_files/20250414 Old Dash Invoicing - Exclude New Dash dat 1d50446ff9c9807aa1edcca0c9e97082.md
new file mode 100644
index 0000000..5a27a6b
--- /dev/null
+++ b/notion_data_team_no_files/20250414 Old Dash Invoicing - Exclude New Dash dat 1d50446ff9c9807aa1edcca0c9e97082.md
@@ -0,0 +1,60 @@
+# 20250414 Old Dash Invoicing - Exclude New Dash data
+
+# Problem
+
+On Wednesday 9th April, an issue concerning Old Dash invoicing was raised in the form of a Data Request. It appears that `34647958964 - Sedona RocksVacation Rentals` has had New Dash Bookings appearing in the Old Dash invoicing exports that the Data Team generates.
+
+According to Lisa, a similar issue happens at least with 4 other accounts.
+
+# Current State
+
+No exclusion of New Dash data is handled in the Invoicing Exporter tool.
+
+Current invoiceable items in Old Dash Invoicing are:
+
+- Bookings
+- Listings
+- Verifications
+- Waivers (for Host-takes-risk, revenue share)
+
+# Final Solution
+
+For Old Dash invoicing:
+
+**Modify:**
+
+- Bookings that appear in `BookingToProductBundle` need to be excluded (see the sketch at the end of this section).
+
+**Investigate:**
+
+- Verify how to handle Verification Fees - although there are not too many accounts.
+
+**Keep as is:**
+
+- Users that migrate to New Dash are NOT supposed to be excluded.
+- Listing logic remains the same. Migrated accounts would naturally show a listing fee of 0.
+- Waiver logic remains the same. This would likely need to be excluded in the future for host-takes-risk revenue share, once implemented in Hyperline for New Dash accounts.
+
+Lastly, Data Team will not re-do previous invoicing exports. These will be handled manually by Finance team. The final solution is expected to be in place for the next invoicing cycle, beginning May 2025.
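+
+For illustration only (a minimal pandas sketch with simplified column names, not the Invoicing Exporter code), the agreed Booking exclusion amounts to an anti-join against `BookingToProductBundle`:
+
+```python
+import pandas as pd
+
+bookings = pd.DataFrame({"BookingId": [101, 102, 103], "Fee": [5.0, 5.0, 5.0]})
+booking_to_product_bundle = pd.DataFrame({"BookingId": [102]})  # New Dash bookings
+
+# Keep only bookings that do NOT appear in BookingToProductBundle.
+old_dash_bookings = bookings[
+    ~bookings["BookingId"].isin(booking_to_product_bundle["BookingId"])
+]
+print(old_dash_bookings)  # 101 and 103 remain invoiceable via Old Dash
+```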
+
+# Solution Discussion
+
+1. Ideally, exclude Users that have been migrated to New Dash
+ - No invoice file would be generated, thus all items would be excluded.
+ - What happens on the transition of a User being migrated from Old Dash to New Dash?
+
+ → It cannot be done by excluding users as proposed here because:
+
+ - It would remove Bookings that are not migrated to New Dash that might still need billing.
+ - It would remove Waivers. It’s not clear, though, that the old PayAway logic would still apply for New Dash users, as these should rely on the service (Basic Waiver vs. Waiver Plus). To be handled differently, as it needs to be implemented in Hyperline, as a separate line of work. We keep it as is.
+2. If not possible, how can we exclude:
+ - Bookings: if a Booking appears in `BookingToProductBundle`, exclude.
+ - It was also suggested to use IsLegacy but wouldn’t work:
+ - ~~For migrated users:~~
+ - ~~IsLegacy = False → New Dash~~
+ - ~~IsLegacy = True → Old Dash~~
+ - ~~for NOT migrated users:~~
+ - ~~IsLegacy = False → Old Dash~~
+ - Listings: when a user is migrated to new dash, fee is set to 0 - Nothing to do on Data side.
+ - Verifications: to be checked on Data side → 7 or 8 accounts; if it’s too much work it can be done manually on Finance side.
+ - Waivers: as is, to be handled separately in the future.
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250505 Retro 1ea0446ff9c98035943ffc3c3f4a6306.md b/notion_data_team_no_files/20250505 Retro 1ea0446ff9c98035943ffc3c3f4a6306.md
new file mode 100644
index 0000000..eb64b14
--- /dev/null
+++ b/notion_data_team_no_files/20250505 Retro 1ea0446ff9c98035943ffc3c3f4a6306.md
@@ -0,0 +1,55 @@
+# 20250505 Retro
+
+## 🙌 What went well
+
+- **Org**
+ - It seems like dust is finally settling after RIF
+ - New DevOps project to track Data Quality issues
+- **Deliveries and WIP**
+ - Nice project on the pipeline: data-driven flagging
+ - +1
+ - Good work with A/B testing
+ - +1
+ - Very nice updates in our reports (KPI targets, growth score, new dash adoption, resolutions payments,..)
+ - Domain analysts growing
+- **Company**
+ - Last months were much better than previous ones from a financial perspective
+- **Pura vida**
+ - Loved Nepali food and all our comilonas
+ - + infinite
+ - Things are chill
+ - Some holidays!
+- A lot less data alerts recently
+
+## 🌱 What needs improvement
+
+- **Disorganization, misalignment**
+ - Feeling like current lines of work are not really impactful
+ - I feel company has become very disorganized. Management runs in circles in meetings, ICs do business as usual in complete disconnect.
+ - Being pulled into some meetings that are mostly a waste of time
+ - API team feels a little lost; it’s lacking a clear leader and communication
+- **… which is causing …**
+ - Things are too chill, routine is kicking in
+ - We’re losing communication output (Data News not being sent on time, didn’t do training sessions on new reports, unclear output of illustrations A/B test)
+ - And also quarterly planning and lookback
+- **Technical**
+ - The goddamn CI server is a headache
+- Missing someone like Joan who kept us more in the loop with the guest team
+- Finance new tracking of Host Resolutions broke Business Targets for a few days
+- Massive involvement on HTVR invoicing discrepancies that should not really be on our scope
+
+## 💡 Ideas for what to do differently
+
+- Should we take the chance of the team’s anniversary to fully review our ways of working, rituals, systems, etc?
+- Self-run with lines of work that we can lead independently
+
+## ✔ Action items
+
+- [ ] Think about changing dbt model naming/structure conventions
+ - Should `int/cross` be split into… something?
+- [ ] Request TMT clarity on org. structure again
+- [ ] Read Shape-up ([https://basecamp.com/shapeup/](https://basecamp.com/shapeup/)) and discuss next retro
+- [ ] Document all the config references (URLs, DB connection strings, credentials, etc)
+- [ ] Agree with Ben R. on a different way to manage permissions PBI
+- [ ] Make a cleaning day for Data Catalogue docs
+- [ ] Document existing invoicing processes, not just new ones
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250605-01 - Overrepresentation of Host Resolutio 2090446ff9c9804ca74be8bfae70fa64.md b/notion_data_team_no_files/20250605-01 - Overrepresentation of Host Resolutio 2090446ff9c9804ca74be8bfae70fa64.md
new file mode 100644
index 0000000..f43acb5
--- /dev/null
+++ b/notion_data_team_no_files/20250605-01 - Overrepresentation of Host Resolutio 2090446ff9c9804ca74be8bfae70fa64.md
@@ -0,0 +1,125 @@
+# 20250605-01 - Overrepresentation of Host Resolutions Payments
+
+Managed by Uri
+
+## Summary
+
+A data bug led to **Host Resolutions Payments being overrepresented** across various reports. This inflated values for related KPIs and metrics, particularly affecting:
+
+- **Business Overview:** Overstated Host Resolution Payouts, understated revenue retained post-resolutions.
+- **Accounting Reports:** Overstated Host Resolution Payments.
+- **Account Management Reports:** Skewed margins and growth data.
+
+The bug caused a **net overstatement of ~£7.2K**, with the majority of the impact (~£6.5K) occurring in the first five days of June.
+
+**Timelines**
+
+- Started at: 28th April 2025
+- Detected at: 5th June 2025, 11:02 UTC
+- Acknowledged at: 5th June 2025, 11:06 UTC (4 minutes post-detection)
+- Mitigated at: 5th June 2025, 11:38 UTC (36 minutes post-detection)
+- Resolved at: 5th June 2025, 11:39 UTC (37 minutes post-detection)
+
+## Impact
+
+Host Resolutions Payments were showing higher values than in reality. This directly affected Host Resolution Payments details, aggregations and dependent metrics, including Revenue Retained Post-Resolutions.
+
+### Temporal affectation
+
+The overrepresentation of Host Resolutions Payments was effectively reported in Power BI between 28th April 2025 and 5th June 2025.
+
+The majority of the impact was during June, accounting for 6.5K GBP. The issue went unnoticed until 5th June, when the relative impact of the mismatch vs. the days elapsed in the month became much more evident than in previous months.
+
+Detail is available in this table, extracted on June 5th:
+
+| Month | Over represented amount (GBP) | GBP/day |
+| --- | --- | --- |
+| April 2025 | -114 | -3.8 |
+| May 2025 | -562 | -18.1 |
+| June 2025 | -6,520 | -1,304 (only first 5 days) |
+
+### Reporting affectations
+
+In **Business Overview**:
+
+- Main KPIs:
+ - Business Targets - Host Resolution Payouts was showing higher values.
+ - The following metrics available in the report were showing **higher** values, in absolute terms:
+ - Host Resolutions Payouts
+ - Host Resolutions Payouts per Booking Created
+ - Host Resolutions Payment Count
+ - Host resolutions Payment Count per Created Booking Rate
+ - The following metrics available in the report were showing **lower** values, in absolute terms:
+ - Revenue Retained Post-Resolutions
+ - Revenue Retained Post-Resolutions Rate
+
+In **Accounting Reports**:
+
+- Resolutions Host Payments: all tabs were affected, Host Resolutions Payments were showing **higher** values in absolute terms.
+
+In **Account Management Reporting**:
+
+- Account Margin: Similar impact as for Main KPIs.
+- Account Performance: Similar impact as for Main KPIs.
+- Churn Report: Revenue Retained Post-Resolutions was showing **lower** values.
+- Account Growth: Revenue Retained Post-Resolutions was showing **lower** values.
+
+## Timeline
+
+| Time (UTC) | Event |
+| --- | --- |
+| 2025-04-25 | A new model called `int_xero__host_resolutions_payments` is released in DWH, which reads from Xero. This aims to unify Host Resolutions Payments from two sources: Bank Transactions and Credit Notes. Prior to this, only Bank Transactions were considered for Host Resolutions Payments. Linked Pull Request [!5015](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/5015). |
+| 2025-04-28 | New logic is applied for KPIs purposes, effectively affecting Main KPIs and Account Management in Power BI. Linked Pull Request [!5071](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/5071). |
+| 2025-04-29 | New logic is applied for Xero reporting, effectively affecting Resolutions Host Payments in Power BI. Linked Pull Request [!5083](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/5083). |
+| 2025-06-04 06:41 | Chloe messages Uri saying that there seems to be a bug in the dial of Host Resolution Payouts in Business Targets (Main KPIs). Uri explains how the projection is based on run-rate, and that there’s no bug in this computation.
+However, the 15K GBP in Actual payouts are indeed bugged, as the figures are higher than they should have been in reality. This, however, goes unnoticed. |
+| 2025-06-05 11:02 | Chloe Lorusso rightly points out that Resolution Payouts are incorrect, as some lines seem duplicated in Power BI → Accounting Reports → Resolutions - Host Payments → Details |
+| 2025-06-05 11:06 | Uri starts the investigation at Data side. |
+| 2025-06-05 11:14 | Issue is **identified**: Resolution payments recorded as Credit Notes are not being filtered by status. This means that statuses such as DELETED are considered, which results in overcounting the Host Resolutions amount paid. |
+| 2025-06-05 11:36 | A fix is applied in `int_xero__host_resolutions_payments` to only include status AUTHORISED and PAID for Credit Notes. No change is applied in Bank Transactions, as it already considered only AUTHORISED. |
+| 2025-06-05 11:38 | Incident is **mitigated**: any downstream dependant of `int_xero__host_resolutions_payments` is run in DWH production, manually, from local command. |
+| 2025-06-05 11:39 | Incident is **resolved**: Bugfix has been pushed to the master branch of DWH in [commit 08678427](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/08678427ad81fc8733574c71c0b1877bf78bc1c5?refName=refs%2Fheads%2Fmaster). |
+
+## Root Cause(s)
+
+The root cause was a bug within a DWH model from Xero that combines Host Resolutions Payments appearing as Bank Transactions and Credit Notes, namely `int_xero__host_resolutions_payments`. This model was put in production late April 2025.
+
+Prior to the existence of this unified model, only Bank Transactions were considered. Upon adding Credit Notes, these were only being filtered by document type corresponding to `ACCRECCREDIT`, while no filter existed on document status. This means that Credit Notes with `DELETED` and `VOIDED` status were also considered.
+
+## Resolution and recovery
+
+Resolution starts on June 5th, at around 11:06 UTC, when Uri deep-dives into the Host Resolution Payments line items that Chloe raised as duplicated a few minutes before. Data team gets notified of an ongoing incident.
+
+Taking a few examples from DWH data, it’s clear that the issue lies in not filtering out Credit Notes with `DELETED` status.
+
+At the moment of applying the fix, it’s noticed that a status named `VOIDED` also exists. Due to the low volume and the fact that we never actually considered this status before for other Xero models, Uri unilaterally decides to discard it as well, although it is worth checking with Finance whether these should be included. The detail of Host Resolutions Payments as Credit Notes can be seen below:
+
+| Document Status | Amount Paid (GBP) |
+| --- | --- |
+| PAID | -116,815 |
+| AUTHORISED | -8,613 |
+| DELETED | -6,803 |
+| VOIDED | -391 |
+
+Thus, the fix effectively only includes Credit Notes with status `PAID` and `AUTHORISED`.
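+
+For illustration (a minimal pandas sketch rather than the actual dbt model), the applied filter keeps only the valid statuses from the table above:
+
+```python
+import pandas as pd
+
+credit_notes = pd.DataFrame(
+    {
+        "DocumentStatus": ["PAID", "AUTHORISED", "DELETED", "VOIDED"],
+        "AmountPaid": [-116815.0, -8613.0, -6803.0, -391.0],
+    }
+)
+
+# Only AUTHORISED and PAID Credit Notes contribute to Host Resolutions Payments.
+VALID_STATUSES = {"PAID", "AUTHORISED"}
+host_resolution_credit_notes = credit_notes[credit_notes["DocumentStatus"].isin(VALID_STATUSES)]
+print(host_resolution_credit_notes["AmountPaid"].sum())  # -125428.0
+```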
+
+Fix is tested locally and then run in DWH production. Once finished, Uri refreshes the Power BI - Host Resolutions Payments and sees how the buggy lines have disappeared. A message is sent back to Chloe and the Data Team, and the bugfix is pushed to production.
+
+## **Lessons Learned**
+
+- What went well
+ - Fast response time
+ - Upon clear evidence, fast mitigation
+- What went badly
+ - Issue went unnoticed for more than a month
+ - No automatic alert detected it
+ - Impacted critical reporting such as Main KPIs (Business Targets)
+ - Chloe had already raised the possibility of a bug one day earlier, but it was not investigated in depth
+- Where we were lucky
+ - Impact was localised mostly in June despite the bug being in the code for several weeks
+
+## Action Items
+
+- [ ] Clarify `VOIDED` status treatment with Finance
+
+## Appendix
\ No newline at end of file
diff --git a/notion_data_team_no_files/20250605-01 - Overrepresentation of Host Resolutio 2090446ff9c9804ca74be8bfae70fa64.md:Zone.Identifier b/notion_data_team_no_files/20250605-01 - Overrepresentation of Host Resolutio 2090446ff9c9804ca74be8bfae70fa64.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/20250605-01 - Overrepresentation of Host Resolutio 2090446ff9c9804ca74be8bfae70fa64.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2025Q1 1570446ff9c980dea9cbf31bb603e09e.md b/notion_data_team_no_files/2025Q1 1570446ff9c980dea9cbf31bb603e09e.md
new file mode 100644
index 0000000..d285a53
--- /dev/null
+++ b/notion_data_team_no_files/2025Q1 1570446ff9c980dea9cbf31bb603e09e.md
@@ -0,0 +1,3 @@
+# 2025Q1
+
+[Q1 Data Scopes proposal](Q1%20Data%20Scopes%20proposal%201570446ff9c9800d9063d448c71aeea1.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025Q1 1570446ff9c980dea9cbf31bb603e09e.md:Zone.Identifier b/notion_data_team_no_files/2025Q1 1570446ff9c980dea9cbf31bb603e09e.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2025Q1 1570446ff9c980dea9cbf31bb603e09e.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/2025Q3 2100446ff9c980c8b55ae57d39836c07.md b/notion_data_team_no_files/2025Q3 2100446ff9c980c8b55ae57d39836c07.md
new file mode 100644
index 0000000..f39a768
--- /dev/null
+++ b/notion_data_team_no_files/2025Q3 2100446ff9c980c8b55ae57d39836c07.md
@@ -0,0 +1,3 @@
+# 2025Q3
+
+[2025-Q3 Data Scope Priorities](2025-Q3%20Data%20Scope%20Priorities%202100446ff9c98097af85f2ecb53a0cdd.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/2025Q3 2100446ff9c980c8b55ae57d39836c07.md:Zone.Identifier b/notion_data_team_no_files/2025Q3 2100446ff9c980c8b55ae57d39836c07.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/2025Q3 2100446ff9c980c8b55ae57d39836c07.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Add a new device to the Data VPN 1350446ff9c9800abb08ec761bf8ad7f.md b/notion_data_team_no_files/Add a new device to the Data VPN 1350446ff9c9800abb08ec761bf8ad7f.md
new file mode 100644
index 0000000..aa3d6d5
--- /dev/null
+++ b/notion_data_team_no_files/Add a new device to the Data VPN 1350446ff9c9800abb08ec761bf8ad7f.md
@@ -0,0 +1,41 @@
+# Add a new device to the Data VPN
+
+## Create a new key pair
+
+You can create a private key in a bash terminal with `wg genkey`.
+To derive the matching public key, pipe the private key into `wg pubkey`, e.g. `wg genkey | tee private.key | wg pubkey > public.key`.
+
+## Add entry in the jumphost config file
+
+In the jumphost server, modify `/etc/wireguard/wg0.conf` and add a new entry for the peer following this structure:
+
+```bash
+[Peer]
+# Probably leave a comment to inform who this is for
+PublicKey =
+AllowedIPs = 192.168.70.XXX/32 # Replace XXX with an available value
+```
+
+Make sure to not generate IP collisions: each `Peer` entry should have a unique `AllowedIPs` value that no other entry is using.
+
+Finally, restart the server so that changes take effect with: `sudo systemctl restart wg-quick@wg0.service`
+
+You can verify that Wireguard is running properly again with: `sudo systemctl status wg-quick@wg0.service`
+
+## Provide user with their private configuration and keys
+
+Next, provide the user with this block of configuration so they can create an entry in their local Wireguard client:
+
+```bash
+[Interface]
+PrivateKey =
+Address = 192.168.70.XXX/32 # The 192.168.70.XXX address assigned to this peer in the jumphost config
+DNS = 192.168.69.1
+
+[Peer]
+PublicKey = bKr79c5XbzudWeUjiwXcxsy1mrrEnrO4xSrNAUZv2GE= # Jumphost public key goes here. This is a valid value as I'm writing this guide, but it might change in the future!
+AllowedIPs = 192.168.69.1/32, 10.69.0.0/24, 52.146.133.0/24
+Endpoint = 172.166.88.95:52420
+```
+
+Besides this config snippet, also provide the public and private keys to the user and instruct them to keep them stored in their password manager.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Add a new device to the Data VPN 1350446ff9c9800abb08ec761bf8ad7f.md:Zone.Identifier b/notion_data_team_no_files/Add a new device to the Data VPN 1350446ff9c9800abb08ec761bf8ad7f.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Add a new device to the Data VPN 1350446ff9c9800abb08ec761bf8ad7f.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Analysis Potential Guest Revenue Loss – Airbnb Boo 1d70446ff9c98085be28f77ba41e7e9f.md b/notion_data_team_no_files/Analysis Potential Guest Revenue Loss – Airbnb Boo 1d70446ff9c98085be28f77ba41e7e9f.md
new file mode 100644
index 0000000..88c1107
--- /dev/null
+++ b/notion_data_team_no_files/Analysis Potential Guest Revenue Loss – Airbnb Boo 1d70446ff9c98085be28f77ba41e7e9f.md
@@ -0,0 +1,115 @@
+# Analysis: Potential Guest Revenue Loss – Airbnb Bookings
+
+### **Objective**
+
+To assess the potential impact on Guest Revenue and Guest Revenue Retained if we lose all income generated by Waiver and Deposit from all bookings with source Airbnb.
+
+### **Context**
+
+- Big portion of our bookings come from Airbnb.
+- A potential upcoming change could lead to a drop in revenue from this channel.
+- This analysis quantifies the potential loss.
+
+### **Methodology**
+
+- Timeframe analysed: Mar 2024 – Feb 2025
+- Data source: DWH
+- Key metrics analysed:
+ - Total Revenue
+ - Revenue Retained
+ - Guest Revenue
+ - Number of Bookings
+- Segmentation by source: Airbnb over Total.
+
+### **Key Findings**
+
+- Airbnb’s Guest Revenue (Waiver and Deposit) accounted for 23.0% of the Total Revenue in the selected period.
+- Estimated **monthly revenue at risk**: **£98,456.**
+- Airbnb’s Guest Revenue Retained (Waiver and Deposit) accounted for 14.4% of the Total Revenue Retained in the selected period.
+- Estimated **monthly revenue retained at risk**: **£38,243.**
+- Supporting metrics:
+ - Airbnb’s Guest Revenue Retained represents 32.2% of Truvi’s Total Guest Revenue Retained.
+ - The monthly amount of Host Revenue generated by Airbnb bookings amounts to **£60,213**
+
+### Data and analysis
+
+All data used was extracted from the DWH.
+
+- Query 1: All bookings from PMS source in the last year (March 2024 to February 2025) with their guest product and payments
+
+ ```sql
+ SELECT DISTINCT
+ b.id_booking,
+ b.created_date_utc,
+ vp.verification_payment_type,
+ vp.amount_without_taxes_in_gbp,
+ CASE
+ WHEN vp.verification_payment_type = 'Waiver' THEN vp.superhog_fee_without_taxes_in_gbp
+ ELSE vp.amount_without_taxes_in_gbp
+ END AS superhog_fee_without_taxes_in_gbp,
+ it.display_name AS PMS,
+ b.id_user_host,
+ uh.id_deal,
+ uh.email,
+ uh.company_name,
+ b.id_booking_source,
+ CASE
+ WHEN b.id_booking_source = 1 THEN 'Unknown'
+ WHEN b.id_booking_source = 2 THEN 'Manual'
+ WHEN b.id_booking_source = 3 THEN 'Airbnb'
+ WHEN b.id_booking_source = 4 THEN 'Vrbo'
+ WHEN b.id_booking_source = 5 THEN 'BookingDotCom'
+ WHEN b.id_booking_source = 6 THEN 'Agoda'
+ WHEN b.id_booking_source = 7 THEN 'Marriott'
+ WHEN b.id_booking_source = 8 THEN 'OneStepLink'
+ ELSE 'Other'
+ END AS booking_source
+ FROM int_core__bookings b
+ INNER JOIN int_core__user_host uh
+ ON uh.id_user_host = b.id_user_host
+ INNER JOIN staging.stg_core__integration i
+ ON b.id_user_host = i.id_superhog_user
+ AND b.created_at_utc BETWEEN i.created_at_utc AND COALESCE(i.deleted_at_utc, '2050-12-31')
+ INNER JOIN staging.stg_core__integration_type it
+ ON it.id_integration_type = i.id_integration_type
+ LEFT JOIN int_core__verification_payments_v2 vp ON vp.id_verification_request = b.id_verification_request AND vp.payment_status = 'Paid'
+ WHERE b.verification_request_booking_source = 'PMS'
+ AND b.created_date_utc between '2024-03-01' and '2025-02-28'
+ AND b.is_duplicate_booking IS FALSE;
+
+ ```
+
+- Query 2: Revenue numbers by deal in the last year (March 2024 to February 2025)
+
+ ```sql
+ SELECT mam.id_deal,
+ mam.main_deal_name,
+ sum(COALESCE(created_bookings, 0)) AS total_bookings,
+ sum(COALESCE(total_revenue_in_gbp, 0)) AS total_revenue,
+ sum(COALESCE(total_guest_revenue_in_gbp, 0)) AS total_guest_revenue,
+ sum(COALESCE(xero_operator_net_fees_in_gbp, 0)) AS invoiced_operator_revenue,
+ sum(COALESCE(xero_apis_net_fees_in_gbp, 0)) AS invoiced_api_revenue,
+ sum(COALESCE(revenue_retained_in_gbp , 0)) AS revenue_retained,
+ sum(COALESCE(waiver_payments_in_gbp, 0)) AS waiver_payments,
+ sum(COALESCE(deposit_fees_in_gbp, 0)) AS deposit_payments
+ FROM reporting.monthly_aggregated_metrics_history_by_deal mam
+ WHERE date BETWEEN '2024-03-01' AND '2025-02-28'
+ GROUP BY 1, 2
+ ORDER BY 4 DESC
+
+ ```
+
+
+To assess the potential impact of losing all Guest Revenue from Airbnb bookings, we extracted **all bookings where `booking_source = 'PMS'` from March 2024 to February 2025**. This timeframe was chosen to ensure we had **complete and finalized revenue data**.
+
+We then:
+
+- **Filtered the bookings by source = Airbnb**, and aggregated the **number of bookings and total Guest Revenue** in a pivot table, grouped by deal.
+- **Extracted total revenue across all deals**, regardless of source, for the same period.
+- **Cross-referenced Airbnb revenue with total deal revenue** to calculate the **proportion of revenue coming from Airbnb**.
+- This allowed us to **estimate the potential impact** of losing all Guest Revenue generated from Airbnb bookings.
+- We also included an estimate of the potential **Host Revenue (Invoiced Operator Revenue)** that could be affected by the loss of Airbnb bookings. Since we don't have direct data to calculate the exact impact, we **approximated it based on the proportion of bookings coming from Airbnb relative to the total number of bookings**.
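+
+As a rough illustration of that approximation (table and column names below are hypothetical; the actual numbers come from the pivot tables in the Excel notebook):
+
+```sql
+-- Illustrative only: Airbnb Host Revenue ≈ total Host Revenue × share of bookings coming from Airbnb.
+select
+    sum(invoiced_operator_revenue)
+        * sum(airbnb_bookings)::numeric / nullif(sum(total_bookings), 0)
+        / 12 as approx_monthly_airbnb_host_revenue_gbp
+from revenue_and_bookings_by_deal;  -- hypothetical consolidated table for the Mar 2024 - Feb 2025 period
+```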
+
+## Excel Notebook
+
+[Airbnb Payments.xlsx](Airbnb_Payments.xlsx)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Analysis Potential Guest Revenue Loss – Airbnb Boo 1d70446ff9c98085be28f77ba41e7e9f.md:Zone.Identifier b/notion_data_team_no_files/Analysis Potential Guest Revenue Loss – Airbnb Boo 1d70446ff9c98085be28f77ba41e7e9f.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Analysis Potential Guest Revenue Loss – Airbnb Boo 1d70446ff9c98085be28f77ba41e7e9f.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Backing up WG Hub config 2260446ff9c9808b8cd9ecc144ce7106.md b/notion_data_team_no_files/Backing up WG Hub config 2260446ff9c9808b8cd9ecc144ce7106.md
new file mode 100644
index 0000000..08dd50b
--- /dev/null
+++ b/notion_data_team_no_files/Backing up WG Hub config 2260446ff9c9808b8cd9ecc144ce7106.md
@@ -0,0 +1,60 @@
+# Backing up WG Hub config
+
+The idea is to automatically copy the Jumphost server WG config to a local device, just in case. Copying gets done through SSH.
+
+Because we want this happening automatically, we need to do some adjustments to automatically enter the SSH key passphrase.
+
+## Dealing with passphrase
+
+Run these commands and enter the passphrase when prompted.
+
+```bash
+eval "$(ssh-agent -s)"
+ssh-add ~/.ssh/superhog-data-general-ssh-prd # I'm assuming this is your path to the key. If it isn't, adjust.
+echo "export SSH_AUTH_SOCK=$SSH_AUTH_SOCK" > ~/.ssh/agent_env
+echo "export SSH_AGENT_PID=$SSH_AGENT_PID" >> ~/.ssh/agent_env
+```
+
+## The actual script
+
+Run this in your terminal to create the backup script:
+
+```bash
+cat << EOF > backup_wg.sh
+#!/bin/bash
+source /home/$USER/.ssh/agent_env
+ssh azureuser@jumphost-prd.prd.data.superhog.com -i /home/$USER/.ssh/superhog-data-general-ssh-prd 'sudo cat /etc/wireguard/wg0.conf' > /home/$USER/wg_server_backup.conf
+EOF
+```
+
+Now test that it works by running in your terminal:
+
+```bash
+chmod 700 backup_wg.sh
+./backup_wg.sh
+
+# Is the file there?
+ls -l | grep wg_server_backup
+
+# Let's print the first line, which usually should simply read "[Interface]"
+head -n 1 wg_server_backup.conf
+```
+
+Make sure this works before scheduling.
+
+## Scheduling
+
+Run this to schedule it to run a few times per day. Hopefully your laptop will be active during some of those times:
+
+```bash
+BACKUP_COMMAND="0 9,12,15,18 * * * /home/$USER/backup_wg.sh"
+(crontab -u $USER -l; echo "$BACKUP_COMMAND" ) | crontab -u $USER -
+```
+
+Your schedule is now ready. Feel free to wait until one of those times is hit to check if the backup file gets created.
+
+# Restoring the backup
+
+Simply edit the Jumphost server file `/etc/wireguard/wg0.conf` to add the contents of the backup.
+
+You would then restart WG in the jumphost with `sudo systemctl restart wg-quick@wg0.service`
\ No newline at end of file
diff --git a/notion_data_team_no_files/Backing up WG Hub config 2260446ff9c9808b8cd9ecc144ce7106.md:Zone.Identifier b/notion_data_team_no_files/Backing up WG Hub config 2260446ff9c9808b8cd9ecc144ce7106.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Backing up WG Hub config 2260446ff9c9808b8cd9ecc144ce7106.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Budget Report 1f40446ff9c98062b778d1ad809dab13.md b/notion_data_team_no_files/Budget Report 1f40446ff9c98062b778d1ad809dab13.md
new file mode 100644
index 0000000..c6e797c
--- /dev/null
+++ b/notion_data_team_no_files/Budget Report 1f40446ff9c98062b778d1ad809dab13.md
@@ -0,0 +1,73 @@
+# Budget Report
+
+## ✅ Objective
+
+Automate the manual Excel-based departmental spend reporting process using Power BI, allowing for comparison between actuals and budgets across departments, with drill-down capability and currency conversion.
+
+---
+
+## 📁 Data Sources
+
+### 1. **Budget 2025–2026 (One-Off Upload)**
+
+- Finalized and static, prepared by the finance team.
+- Contains:
+ - Cost categories per department
+ - Monthly figures for FY 2025–2026
+
+### 2. **Monthly Actuals from Xero**
+
+- Exported monthly across all Truvi entities.
+- Raw data may contain inaccuracies in:
+ - Cost category
+ - Department
+- Mapping corrections we can apply, similar to the accounting aggregation levels:
+ - Cost category
+ - Mapping to Month End Level 1
+ - Mapping to Month End Level 2
+ - Department
+
+---
+
+## 🧩 Key Requirements for the Power BI Report
+
+### 🔍 Filters (Slicers)
+
+- **Month End Date**
+- **Department**
+
+### 📊 Report Layout (Table Format)
+
+| Apr Actual | Apr Budget | Variance to budget (£) | Variance to budget (%) | Last Month Actual | Variance to Last Month (£) | Variance to Last Month (%) | YTD Actuals | YTD Budget | Variance (£) | Variance to budget (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+
+---
+
+## 🧾 Drilldown Structure
+
+### Level 1: **Month End Level 1**
+
+E.g., **Entertaining**
+
+### Level 2: **Month End Level 2**
+
+- Client Entertaining
+- Staff Entertaining
+
+➡️ Users should be able to **drill down** from Level 1 to Level 2.
+
+---
+
+## 💷 Currency Conversion
+
+- All amounts should be displayed in **GBP (excluding VAT)**
+- Use **monthly exchange rates from Xero**
+- Conversion based on **posting month**
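+
+A minimal sketch of how the conversion could look, assuming hypothetical `xero_actuals` and `xero_monthly_fx_rates` tables (the real model names and columns will differ):
+
+```sql
+select
+    a.department,
+    a.month_end_level_1,
+    a.posting_month,
+    sum(a.amount_excl_vat * fx.rate_to_gbp) as amount_gbp_excl_vat
+from xero_actuals a
+join xero_monthly_fx_rates fx
+    on fx.currency = a.currency
+    and fx.month = a.posting_month  -- conversion based on the posting month
+group by 1, 2, 3;
+```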
+
+---
+
+## 📎 Attachments / Files
+
+- ✅ Budget 2025–2026 File
+
+ [Truvi Group Dept Analysis 2025-26 PBI Test.xlsx](Truvi_Group_Dept_Analysis_2025-26_PBI_Test.xlsx)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Budget Report 1f40446ff9c98062b778d1ad809dab13.md:Zone.Identifier b/notion_data_team_no_files/Budget Report 1f40446ff9c98062b778d1ad809dab13.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Budget Report 1f40446ff9c98062b778d1ad809dab13.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Business KPIs Documentation 292c74f608eb46d8b9887e239046cc87.md b/notion_data_team_no_files/Business KPIs Documentation 292c74f608eb46d8b9887e239046cc87.md
new file mode 100644
index 0000000..0d654d4
--- /dev/null
+++ b/notion_data_team_no_files/Business KPIs Documentation 292c74f608eb46d8b9887e239046cc87.md
@@ -0,0 +1,53 @@
+# Business KPIs Documentation
+
+This page aims to provide a clear documentation of the business KPIs initiative within the DWH world.
+
+# Introduction
+
+**Business KPIs** is an initiative started around the end of May 2024. With some figures around sources of revenue being finalised by the end of that month, and a number of already existing but dispersed legacy reports of uneven quality, the Data team started to centralise and lead the KPI definition and exposure, first for the top management team and later for the rest of the company.
+
+# Delivery strategy
+
+During the first stages, the goal was to provide batches of deliverables in time-framed periods based on 1) figures feasible to deliver, 2) with a certain “good” level of data quality and 3) that could help provide a proper understanding of the business direction.
+
+All these batches have been discussed and agreed on with the top management team (TMT), and can be accessed here:
+
+[Reporting Needs ](https://www.notion.so/Reporting-Needs-afaf4d5384764023a246d6cf7de201b4?pvs=21)
+
+Additionally, the abovementioned page contains many more details on the metrics, dimensions and ways to visualise KPIs. Keep in mind that the definitions and the extent of what’s available might not always be up-to-date or fully complete, since this is an iterative process in constant evolution.
+
+# Useful links
+
+Find below some Notion pages that at some point have been related to the Business KPIs initiative:
+
+## Business-oriented
+
+[Revenue naming - 2024-09-30](Revenue%20naming%20-%202024-09-30%201110446ff9c980cfaf13ec0121b9c2c7.md)
+
+[Listing & Deal lifecycle - 2024-07-29](Listing%20&%20Deal%20lifecycle%20-%202024-07-29%204dc0311b21ca44f8859969e419872ebd.md)
+
+## Tech-oriented
+
+[KPIs Refactor -2025-04-01](KPIs%20Refactor%20-2025-04-01%201c70446ff9c9800a8aa2d9706416b38d.md)
+
+[Technical Documentation - 2024-11-12](Technical%20Documentation%20-%202024-11-12%2013c0446ff9c980719db3f4c420995f70.md)
+
+[KPIs Refactor - Let’s go daily - 2024-10-23](KPIs%20Refactor%20-%20Let%E2%80%99s%20go%20daily%20-%202024-10-23%201280446ff9c980dc87a3dc7453e95f06.md)
+
+[(Legacy) Technical Documentation - 2024-09-20]((Legacy)%20Technical%20Documentation%20-%202024-09-20%201070446ff9c980a4a850f159d4f55f8b.md)
+
+[Exploration - MetricFlow - 2024-08-06](Exploration%20-%20MetricFlow%20-%202024-08-06%20f45d91500ad7433d9ff4e094b8a5f40b.md)
+
+[(Legacy) Technical Documentation - 2024-08-05]((Legacy)%20Technical%20Documentation%20-%202024-08-05%20aa7e1cf16b6e410b86ee0787a195be48.md)
+
+[Refactoring Business KPIs - 2024-07-05](Refactoring%20Business%20KPIs%20-%202024-07-05%205deb6aadddb34884ae90339402ac16e3.md)
+
+## Data quality assessments
+
+[Data quality assessment: DWH vs. Finance revenue figures](Data%20quality%20assessment%20DWH%20vs%20Finance%20revenue%20fig%206e3d6b75cdd4463687de899da8aab6fb.md)
+
+[Data quality assessment: Billable Bookings](Data%20quality%20assessment%20Billable%20Bookings%2097008b7f1cbb4beb98295a22528acd03.md)
+
+[Data quality assessment: Guest Journeys with Payments but that are not completed (or not even started)](Data%20quality%20assessment%20Guest%20Journeys%20with%20Paymen%205a34141e4f2f4267a9ce290101179610.md)
+
+[Data quality assessment: Verification Requests with Payment but without Bookings](Data%20quality%20assessment%20Verification%20Requests%20with%201350446ff9c980f9b0bdea31eb03bac4.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Business KPIs Documentation 292c74f608eb46d8b9887e239046cc87.md:Zone.Identifier b/notion_data_team_no_files/Business KPIs Documentation 292c74f608eb46d8b9887e239046cc87.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Business KPIs Documentation 292c74f608eb46d8b9887e239046cc87.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Busy man’s guide to optimizing dbt models performa b0540bf8fa0a4ca5a6220b9d8132800d.md b/notion_data_team_no_files/Busy man’s guide to optimizing dbt models performa b0540bf8fa0a4ca5a6220b9d8132800d.md
new file mode 100644
index 0000000..42deb68
--- /dev/null
+++ b/notion_data_team_no_files/Busy man’s guide to optimizing dbt models performa b0540bf8fa0a4ca5a6220b9d8132800d.md
@@ -0,0 +1,306 @@
+# Busy man’s guide to optimizing dbt models performance
+
+You have a `dbt` model that takes ages to run in production. For some very valid reason, this is a problem.
+
+This is a small reference guide on things you can try. I suggest you try them from start to end, since they are sorted in descending order by value/complexity ratio.
+
+Before you start working on a model, you might want to check [the bonus guide at the bottom](Busy%20man%E2%80%99s%20guide%20to%20optimizing%20dbt%20models%20performa%20b0540bf8fa0a4ca5a6220b9d8132800d.md) to learn how to make sure you don’t change the outputs of a model while refactoring it.
+
+If you’ve tried everything you could here and things still don’t work, don’t hesitate to call Pablo.
+
+## 1. Is your model *really* taking too long?
+
+> Before you optimize a model that is taking too long, make sure it actually takes too long.
+>
+
+The very first step is to really assess if you do have a problem.
+
+We run our DWH in a Postgres server, and Postgres is a complex system. Postgres is doing many things at all times and it’s very stateful, which means you will pretty much never see *exactly* the same performance twice for some given query.
+
+Before going crazy optimizing, I would advise running the model or the entire project a few times and observing the behaviour. It might be that *some day* it took very long for some reason, but usually, it runs just fine.
+
+You also might want to do this at a moment when there’s little activity in the DWH, like very early or late in the day, so that other users’ activity in the DWH doesn’t pollute your observations.
+
+If this is a model that is already being run regularly, we can also leverage the statistics collected by the `pg_stat_statements` Postgres extension to check the min, avg, and max run times for it. Ask Pablo to get this.
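+
+For reference, a query along these lines returns per-statement timing stats (it assumes the extension is installed and you have permission to read it; the model name in the filter is just an example):
+
+```sql
+select
+    calls,
+    round(min_exec_time)  as min_ms,
+    round(mean_exec_time) as avg_ms,
+    round(max_exec_time)  as max_ms,
+    query
+from pg_stat_statements
+where query ilike '%int_my_model%'  -- hypothetical model name
+order by mean_exec_time desc
+limit 10;
+```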
+
+## 2. Reducing the amount of data
+
+> Make your query only bring in the data it needs, and not more. Reduce the amount of data as early as possible.
+>
+
+This option is a simple optimization trick that can be used in many areas and it’s easy to pull off.
+
+The two holy devils of slow queries are large amounts of data and monster lookups/sorts. Both can be drastically reduced by simply reducing the amount of data that goes into the query, typically by applying some smart `WHERE` or creative conditions on a `JOIN` clause. This can be either done in your basic CTEs where you read from other models, or in the main `SELECT` of your model.
+
+Typically, try to make this as *early* as possible in the model. Early here refers to the steps of your query. In your queries, you will typically:
+
+- read a few tables,
+- do some `SELECTs`
+- then do more crazy logic downstream with more `SELECTs`
+- and the party goes on for as long and complex your case is
+
+Reducing the amount of data at the end is pointless. You will still need to read a lot of stuff early and have monster `JOIN`s, window functions, `DISTINCT`s, etc. Ideally, you want to do it when you first access an upstream table. If not there, then as early as possible within the logic.
+
+The specifics of how to apply this are absolutely query dependent, so I can’t give you magic instructions for the query you have at hand. But let me illustrate the concept with an example:
+
+### Only hosts? Then only hosts
+
+You have a table `stg_my_table` with a lot of data, let’s say 100 million records, and each record has the id of a host. In your model, you need to join these records with the host user data to get some columns from there. So right now your query looks something like this (tables fictional, this is not how things look in DWH):
+
+```sql
+with
+stg_my_table as (select * from {{ ref("stg_my_table") }}),
+stg_users as (select * from {{ ref("stg_users")}})
+
+select
+ ...
+from stg_my_table t
+left join
+ stg_users u
+ on t.id_host_user = id_user
+```
+
+At the time I’m writing this, the real user table in our DWH has like 600,000 records. This means that:
+
+- The CTE `stg_users` will need to fetch 600,000 records, with all their data, and store them.
+- Then the left join will have to join 100 million records from `my_table` with the 600,000 user records.
+
+Now, this is not working for you because it takes ages. We can easily improve the situation by applying the principle of this section: reducing the amount of data.
+
+Our user table in the DWH has both hosts and guests. Actually, it has ~1,000 hosts and everything else is just guests. This means that:
+
+- We’re fetching around 599,000 guest details that we don’t care about at all.
+- Every time we join a record from `my_table`, we do so against 600,000 user records when we only truly care about 1,000 of them.
+
+Stupid, isn’t it?
+
+Well, imagining that our fictional `stg_users` table had a field called `is_host`, we can rewrite the query this way to get exactly the same result in only a fraction of the time:
+
+```sql
+with
+stg_my_table as (select * from {{ ref("stg_my_table") }}),
+stg_users as (
+ select *
+ from {{ ref("stg_users")}}
+ where is_host = true
+ )
+
+select
+ ...
+from stg_my_table t
+left join
+ stg_users u
+ on t.id_host_user = id_user
+```
+
+It’s simple to understand: the CTE will now only get the 1,000 records related to hosts, which means we save performance in both fetching that data and having a much smaller join operation downstream against `stg_my_table`.
+
+## 3. Inlining CTEs
+
+> Replace CTEs with inline references to avoid optimization fences.
+>
+
+This one is a bit more brainy. I’ll split this bit in three parts: what is it, why does it work, and when it’s NOT a good option.
+
+### What is it
+
+As per our agreed good practices when building models, we always include references to upstream models we depend on as CTEs on the top of the file. So our models tend to look like this:
+
+```sql
+with
+ stg_some_table as (select * from {{ ref("stg_some_table") }}),
+ stg_some_other_table as (select * from {{ ref("stg_some_other_table") }}),
+ some_intermediate_thingie as (
+ select
+ ...
+ from stg_some_table
+ where
+ ...
+ group by
+ ...
+ )
+select
+ ...
+from stg_some_table st
+left join stg_some_other_table sot
+ on st.an_id = sot.an_id
+left join some_intermediate_thingie sit
+ on sot.another_id = sit.you_guessed_it_its_another_id
+```
+
+To inline the CTEs means to replace some or all of the CTEs in the model with direct references. For example, a first level of this could be to simply remove the first two CTEs:
+
+```sql
+with
+ some_intermediate_thingie as (
+ select
+ ...
+ from {{ ref("stg_some_table") }}
+ where
+ ...
+ group by
+ ...
+ )
+select
+ ...
+from {{ ref("stg_some_table") }} st
+left join {{ ref("stg_some_other_table") }} sot
+ on st.an_id = sot.an_id
+left join some_intermediate_thingie sit
+ on sot.another_id = sit.you_guessed_it_its_another_id
+```
+
+Or I could go all the way and remove all CTEs by using a subquery:
+
+```sql
+select
+ ...
+ from {{ ref("stg_some_table") }} st
+ left join {{ ref("stg_some_other_table") }} sot
+ on st.an_id = sot.an_id
+ left join (
+ select
+ ...
+ from {{ ref("stg_some_table") }}
+ where
+ ...
+ group by
+ ...
+ ) sit
+ on sot.another_id = sit.you_guessed_it_its_another_id
+```
+
+So, inlining is as simple as that: simply go around destroying CTEs and placing `ref` , subqueries and whatever else where needed to keep the query result the same.
+
+Inlining breaks our convention around how we write our models, and makes them harder to read. Never resort to inlining from scratch: always build models with CTEs, and only apply inlining if it’s critical for performance. And by critical, I truly mean critical. Don’t apply inlines to a 20 seconds-long model to make it 15 seconds if nobody cares about those 5 seconds.
+
+### Why does inlining work
+
+CTEs can become optimization fences for Postgres. But what the hell is that?
+
+If you write a main select with many subqueries, Postgres will try to play smart games with it to make it as fast as possible. For instance, if it finds that a `where` condition that you placed in the outermost `select` statement could be done in a subquery to make things faster, it will go ahead and do so (if this rings a bell, yes, this is the same as principle #1 of this guide. Postgres will also try to do it automatically for you at times).
+
+The issue is that sometimes, with CTEs, Postgres refuses to play these tricks. Instead, Postgres will commit to executing your CTE exactly as it is, without any consideration for how its output is used later. This means missed opportunities to make the query faster. This is what we call an optimization fence.
+
+So, why this works should now be obvious: by throwing away the CTE and doing the same thing without it, you allow Postgres to leverage more optimization strategies.
+
+### When it’s not a good idea
+
+Most times, CTEs won’t be the cause of your issues. So, my advice when attempting this strategy is to simply try out and measure the performance of your query with and without inlining. If your results make it obvious that inlining is not helping (or is hurting), then simply revert back to having CTEs.
+
+There is also one special situation where removing CTEs is probably a terrible idea. If you have:
+
+- A CTE that does some very costly query.
+- And that CTE is referenced in many other parts of the model multiple times.
+
+then that CTE is probably helping, not hurting performance. This is because Postgres will compute the CTE only once and allow all downstream operations to read from the temp result, whereas if you inline it, it might end up repeating the costly query multiple times.
+
+## 4. Change upstream materializations
+
+> Materialize upstream models as tables instead of views to reduce computation on the model at hand.
+>
+
+Going back to basics, dbt offers [multiple materializations strategies for our models](https://docs.getdbt.com/docs/build/materializations).
+
+Typically, for reasons that we won’t cover here, the preferred starting point is to use views. We only go for tables or incremental materializations if there are good reasons for this.
+
+If you have a model that is having terrible performance, it’s possible that the fault doesn’t sit with the model itself, but rather with an upstream model. Let me give an example.
+
+Imagine we have a situation with three models:
+
+- `stg_my_simple_model`: a model with super simple logic and small data
+- `stg_my_crazy_model`: a model with a crazy complex query and lots of data
+- `int_my_dependant_model`: an int model that reads from both previous models.
+- Where the staging models are set to materialize as views and the int model is set to materialize as a table.
+
+Because the two staging models are set to materialize as views, every time you run `int_my_dependant_model`, you will also have to execute the queries of `stg_my_simple_model` and `stg_my_crazy_model`. If the upstream view models are fast, this is not an issue of any kind. But if one of them is a heavy query, it could be.
+
+The point is, you might notice that `int_my_dependant_model` takes 600 seconds to run and think there’s something wrong with it, when actually the fault sits at `stg_my_crazy_model`, which perhaps is taking 590 seconds out of the 600.
+
+How can materializations solve this? Well, if `stg_my_crazy_model` was materialized as a table instead of as a view, whenever you ran `int_my_dependant_model` you would simply read from a table with pre-populated results, instead of having to run the `stg_my_crazy_model` query each time. Typically, reading the results will be much faster than running the whole query. So, in summary, by making `stg_my_crazy_model` materialize as a table, you can fix your performance issue in `int_my_dependant_model`.
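+
+In dbt terms, that change is just a config switch on the upstream model, something along these lines (it could equally live in `dbt_project.yml`):
+
+```sql
+-- stg_my_crazy_model.sql
+-- Persist the results as a table so downstream models read precomputed data
+-- instead of re-running this heavy query every time.
+{{ config(materialized='table') }}
+
+select
+    ...
+```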
+
+## 5. Switch the model materialization to `incremental`
+
+> Make the processing of the table happen in small batches instead of on all data to make it more manageable.
+>
+
+Imagine we want to count how many bookings were created each month.
+
+As time passes, more and more months and more and more bookings appear in our history, making the size of this problem ever increasing. But then again, once a month has finished, we shouldn’t need to go back and revisit history: what’s done is done, and only the ongoing month is relevant, right?
+
+[dbt offers a materialization strategy named](https://docs.getdbt.com/docs/build/incremental-models) `incremental`, which allows you to only work on a subset of data. This means that every time you run `dbt run`, your model only works on a certain part of the data, and not all of it. If the nature of your data and your needs allow isolating each run to a small part of all upstream data, this strategy can wildly improve performance.
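+
+To make the monthly-bookings example concrete, a minimal incremental sketch could look like this (assuming a `stg_bookings` model with a `created_at_utc` column; the real models will differ):
+
+```sql
+{{ config(
+    materialized='incremental',
+    unique_key='booking_month'
+) }}
+
+select
+    date_trunc('month', created_at_utc) as booking_month,
+    count(*) as created_bookings
+from {{ ref('stg_bookings') }}
+{% if is_incremental() %}
+-- on incremental runs, only reprocess the current (still ongoing) month
+where created_at_utc >= date_trunc('month', current_date)
+{% endif %}
+group by 1
+```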
+
+Explaining the inner details of `incremental` goes beyond the scope of this page. You can check the official docs from `dbt` ([here](https://docs.getdbt.com/docs/build/incremental-models)), ask the team for support or check some of the incremental models that we already have in our project and use them as references.
+
+Note that using `incremental` strategies makes life way harder than simple `view` or `table` ones, so only pick this up if it’s truly necessary. Don’t make models incremental without trying other optimizations first, or simply because you realise that you *could* use it in a specific model.
+
+
+
+dbt’s official docs (wisely) warning you of the dangers of incremental.
+
+## 6. End of the line: general optimization
+
+The final tip is not really a tip. The above five things are the easy-peasy, low-hanging-fruit stuff that you can try. This doesn’t mean that there isn’t more that you can do, just that I don’t know of simpler stuff that you can try without deep knowledge of how Postgres works underneath and a willingness to get your hands *real* dirty.
+
+If you’ve reached this point and your model is still performing poorly, you either need to put your Data Engineer hat on and really deepen your knowledge… or call Pablo.
+
+## Bonus: how to make sure you didn’t screw up and change the output of the model
+
+The topic we are discussing in this guide is making refactors purely for the sake of performance, without changing the output of the given model. We simply want to make the model faster, not change what data it generates.
+
+That being the case, and considering the complexity of the strategies we’ve presented here, being afraid that you messed up and accidentally changed the output of the model is a very reasonable fear to have. That’s a kind of mistake that we definitely want to avoid.
+
+Doing this manually can be a PITA and very time consuming, which doesn’t help at all.
+
+To make your life easier, I’m going to show you a new little trick.
+
+### Hashing tables and comparing them
+
+I’ll post a snippet of code here that you can run to check whether any pair of tables has *exactly* the same contents. Emphasis on exactly. Changing the slightest bit of content will be detected.
+
+```sql
+-- Run each query separately; replace my_first_table / my_second_table with the
+-- tables you want to compare. The ORDER BY must be deterministic (e.g. the
+-- primary key), otherwise two identical tables could still produce different hashes.
+SELECT md5(array_agg(md5((t1.*)::varchar))::varchar)
+  FROM (
+    SELECT *
+    FROM my_first_table
+    ORDER BY id  -- assumption: id is the table's primary key
+  ) AS t1;
+
+SELECT md5(array_agg(md5((t2.*)::varchar))::varchar)
+  FROM (
+    SELECT *
+    FROM my_second_table
+    ORDER BY id  -- assumption: id is the table's primary key
+  ) AS t2;
+```
+
+How this works is: you execute the two queries, which will return a single value each. Some hexadecimal gibberish.
+
+If the output of the two queries is identical, it means their contents are identical. If they are different, it means there’s something different across both.
+
+If you don’t understand how this works, and you don’t care, that’s fine. Just use it.
+
+If not knowing does bother you, you should go down the rabbit holes of hash functions and deterministic serialization.
+
+### Including this in your refactoring workflow
+
+Right, now you know how to make sure that two tables are identical.
+
+This is dramatically useful for your optimization workflow. You can now simply:
+
+- Keep the original model
+- Create a copy of it, which is the one you will be working on (the working copy)
+- Prepare the magic query to check their contents are identical
+- From this point on, you can enter in this loop for as long as you want/need:
+ - Run the magic query to ensure you start from same-output-state
+ - Modify the working copy model to attempt whatever optimization thingie you wanna try
+ - Once you are done, run the magic query again.
+ - If the output is not the same anymore, you screwed up. Start again and avoid whatever mistake you made.
+ - If the output is still the same, you didn’t cause a change in the model output. Either keep on optimizing or call it a day.
+- Finally, just copy over the working copy model code into the old one and remove the working copy.
+
+I hope that helps. I also recommend doing the loop as frequently as possible. The fewer things you change between executions of the magic query, the easier it is to realize what caused errors if they appear.
+
+*(image: quotation from Donald Knuth - "[StructuredProgrammingWithGoToStatements](http://web.archive.org/web/20130731202547/http://pplab.snu.ac.kr/courses/adv_pl05/papers/p261-knuth.pdf)")*
\ No newline at end of file
diff --git a/notion_data_team_no_files/Busy man’s guide to optimizing dbt models performa b0540bf8fa0a4ca5a6220b9d8132800d.md:Zone.Identifier b/notion_data_team_no_files/Busy man’s guide to optimizing dbt models performa b0540bf8fa0a4ca5a6220b9d8132800d.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Busy man’s guide to optimizing dbt models performa b0540bf8fa0a4ca5a6220b9d8132800d.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Can’t backup single tables from DWH in DBeaver df6fc66189db415faa9715376832e5ba.md b/notion_data_team_no_files/Can’t backup single tables from DWH in DBeaver df6fc66189db415faa9715376832e5ba.md
new file mode 100644
index 0000000..aed6f5b
--- /dev/null
+++ b/notion_data_team_no_files/Can’t backup single tables from DWH in DBeaver df6fc66189db415faa9715376832e5ba.md
@@ -0,0 +1,13 @@
+# Can’t backup single tables from DWH in DBeaver
+
+You might face the issue of DBeaver throwing an error at you when trying to backup individual tables with its backup feature.
+
+If you look into the logs and find complaints about the version, the issue is probably that your local (on your laptop) PostgreSQL binaries are not version 16. This version mismatch with the server causes the issue.
+
+To fix it, you need to install the PostgreSQL client binaries for version 16 and then select them in DBeaver. The steps are roughly:
+
+- Go to [https://www.enterprisedb.com/downloads/postgres-postgresql-downloads](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads)
+- Download version 16 for Windows
+- Install it.
+ - During install, you only need Stack Builder and the Command Line Tools. There’s no need to do the full PostgreSQL install, unless you want an actual database on your laptop, that is.
+- In DBeaver, try to run a backup, and you’ll see a little button that reads Local Client in the bottom left corner. Go there and add a new client entry pointing to the stuff you installed, which will most probably be living in: `C:\Program Files\Postgresql\16`
\ No newline at end of file
diff --git a/notion_data_team_no_files/Can’t backup single tables from DWH in DBeaver df6fc66189db415faa9715376832e5ba.md:Zone.Identifier b/notion_data_team_no_files/Can’t backup single tables from DWH in DBeaver df6fc66189db415faa9715376832e5ba.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Can’t backup single tables from DWH in DBeaver df6fc66189db415faa9715376832e5ba.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Careful with the DB How to work in SQL Server with 405c497b76c74bb29dcc790bc59928fd.md b/notion_data_team_no_files/Careful with the DB How to work in SQL Server with 405c497b76c74bb29dcc790bc59928fd.md
new file mode 100644
index 0000000..17750fb
--- /dev/null
+++ b/notion_data_team_no_files/Careful with the DB How to work in SQL Server with 405c497b76c74bb29dcc790bc59928fd.md
@@ -0,0 +1,56 @@
+# Careful with the DB: How to work in SQL Server without giving Pablo a stroke
+
+This is a brief guide in how to work on the SQL Server DB we have for our backend without giving the Data Team a hard time.
+
+# TLDR
+
+Short and sweet:
+
+- Give all tables a Primary Key.
+- `UpdatedDate` fields are massively useful for the Data team. Do include them where possible, specially in tables that grow large over time.
+- Respect `UpdatedDate` fields.
+- If you don’t respect the `UpdatedDate`, let the Data team know. No hard feelings, hiding it will just be worse.
+- Before deleting/renaming tables and columns, please give us a call to check that the change won’t cause incidents in downstream dependencies.
+
+# SQL Server and the Data Team
+
+## What we do
+
+The Data Team centralizes a lot of Superhog’s data sources in the DWH. There are good reasons for this that go beyond the scope of this document. Our SQL Server in the backend (which we usually refer to as Core) is one of these sources.
+
+To achieve this, we replicate Core’s data in the DWH, regularly syncing data on different frequencies, from daily to hourly.
+
+## How we do it
+
+We use a tool called Airbyte. This replication is very simple: we just move the data as-is, without any kind of transformation at all. It’s not the typical ETL, but rather just EL (extract, load).
+
+There are two ways we can replicate each individual table from Core in the DWH:
+
+- **Full refresh:** on a scheduled basis, we destroy the replicated table in the DWH and read all the data from the source again.
+- **Incremental**: we only pick up new changes. If a record gets updated in Core, we update it in the DWH as well. If a record gets created in Core, we create it in the DWH as well.
+
+As you might be guessing, we prefer incremental loads: they are faster and lighter on both source and destination. But these are only possible for tables that have a specified PK and a well-maintained `UpdatedDate` field.
+
+We highly appreciate tables being created with `UpdatedDate` fields (or `CreatedDate`, if the table is fully immutable). This is especially important for tables that are large or will eventually grow into being large (just to throw a number here for the sake of not being ambiguous, let’s say a million records makes a table large).
+
+## Mistakes, ways to make them and remediation
+
+### Not respecting `UpdatedDate`
+
+Most common situation. This feels easy to avoid, but there are sneaky ways to make mistakes with this. Here’s a list of examples of things that led to this happening in the past:
+
+- Code bugs.
+- You went to run an `UPDATE` manually in production because some emergency required it, but you forgot to include the `UpdatedDate` field with a `GETUTCDATE()` (see the example after this list).
+- You’ve created a nice seeding method within the migrations of the database, but the seeding statement doesn’t update the `UpdatedDate`
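+
+If you ever do need to run a manual `UPDATE` in production, remember to bump the field yourself, along these lines (table and columns are hypothetical):
+
+```sql
+UPDATE Bookings
+SET Status = 'Cancelled',
+    UpdatedDate = GETUTCDATE()  -- without this, the change never reaches the DWH
+WHERE Id = 12345;
+```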
+
+The technical impact of this is that the records where the field was not honored will *not* be replicated in the DWH, causing the data in the DWH and Core to drift. The business impact can go all the way from none at all to multiplying the revenue in a TMT report by 25.
+
+If it does happen, please, get in touch and let us know. We can solve the situation by running a full-refresh between Core and the DWH. Depending on the size of the table, this may be problematic and cause some disruption. But leaving the drift be will surely be worse.
+
+### Breaking schema changes (deletes/renames) without alerting the Data Team
+
+If you remove or rename a table or column that we are replicating in the DWH without notice, our pipelines and all their downstream dependencies will break.
+
+The easiest way to prevent this is to have us in the loop. We make our best effort to quickly let you know if your change needs coordination with us. 95% of the time we will just let you know on the same day that the change doesn’t affect us and give you a green light straight away.
+
+If this goes uncommunicated, the remediation will be fully on our side: we will surely notice because smoke will start coming out of the pipelines and DWH. We will be happy to run a post-mortem together to understand what went wrong to prevent it in the future.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Careful with the DB How to work in SQL Server with 405c497b76c74bb29dcc790bc59928fd.md:Zone.Identifier b/notion_data_team_no_files/Careful with the DB How to work in SQL Server with 405c497b76c74bb29dcc790bc59928fd.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Careful with the DB How to work in SQL Server with 405c497b76c74bb29dcc790bc59928fd.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Choosing b305d0910ef446578cc28c3b79042ea1.md b/notion_data_team_no_files/Choosing b305d0910ef446578cc28c3b79042ea1.md
new file mode 100644
index 0000000..03942f2
--- /dev/null
+++ b/notion_data_team_no_files/Choosing b305d0910ef446578cc28c3b79042ea1.md
@@ -0,0 +1,49 @@
+# Choosing
+
+## Intro
+
+First step is to choose which one we want to go for.
+
+The go-to names in the industry are Airflow, Prefect and Dagster. After a lot of unstructured research during the past year, I’ve decided to simply narrow it down to Prefect and Dagster. Both options seem more feature-rich, have better integrations with our stack, and have fewer people complaining about them than Airflow. Airflow is what everyone uses and everyone complains about, so we might just as well directly dodge the bullet.
+
+Between Prefect and Dagster: I can’t pick one yet. I worked a lot with Prefect and I know it’s good, but that was on Prefect 1, and they are already on version 3, so things might have changed a lot.
+
+On the other hand, I haven’t tried Dagster, but I’ve heard lovely things about it. Apparently, its data asset abstraction makes pipelines and governance incredibly better. Plus, it has very nice integrations with Airbyte and dbt, way better than what I’ve seen in other tools.
+
+To be able to choose between the two, I’ve decided to run a little bit of a hello-world exercise with both of them. The plan is to do the same stuff on both, document it, discuss with Uri, and then make a decision. Once that’s done, we start planning how we do the production deployment and how we move executions over there.
+
+## Orchestration hello-world
+
+These are the steps I would like to run with both.
+
+- Try to deploy it locally
+- Deploy a local Airbyte and a local DWH alongside
+- Try to setup a full Xero pipeline
+ - This means, setting up an Airbyte connection and running locally the dbt pipeline, for the Xero tables (`dbt run -s models/staging/xero+`)
+ - The pipeline should run everything: airbyte, and all the layers of dbt stuff.
+ - also, run related dbt tests
+- Try to setup a full `xexe` pipeline
+ - This means, triggering runs made with the CLI interface of `xexe` or by importing it as a library, and then running locally the dbt pipeline for the downstream currency related tables (not including the gazillion DWH tables that depend on them. Just currency stuff down to `int_simple_exchange_rates`).
+ - also, run related dbt tests
+
+Besides that, I might also try to:
+
+- Send messages through slack for alerts
+- Deploy on Azure (not the final, production deployment by any means)
+
+Some areas where I would like to take thorough notes on the features:
+
+- Retry logic
+- Pipeline logs
+- dbt logs
+- Warnings and alerts, perhaps even incident management
+- Scalability features with parametrization
+- Secret management
+- Pipeline version control
+- Triggering and scheduling capabilities
+- API for external services to interact
+- ownership and governance of pipelines
+- how the hell can we play it smart with backfills
+- Development and deployment flow
+
+[Dagster hello-world](Dagster%20hello-world%20723420fec494478b9c89d308b0f213a7.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Choosing b305d0910ef446578cc28c3b79042ea1.md:Zone.Identifier b/notion_data_team_no_files/Choosing b305d0910ef446578cc28c3b79042ea1.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Choosing b305d0910ef446578cc28c3b79042ea1.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Churning Deals Warning – Early Alert System 1d00446ff9c98056b4f9fb5177e4e64d.md b/notion_data_team_no_files/Churning Deals Warning – Early Alert System 1d00446ff9c98056b4f9fb5177e4e64d.md
new file mode 100644
index 0000000..0ad88a0
--- /dev/null
+++ b/notion_data_team_no_files/Churning Deals Warning – Early Alert System 1d00446ff9c98056b4f9fb5177e4e64d.md
@@ -0,0 +1,64 @@
+# Churning Deals Warning – Early Alert System
+
+## Context & Objective
+
+The current Churn Report focuses on deals that are **already churned** — either through a **formal cancellation** or after **12 consecutive months of inactivity**. However, by the time a deal is flagged here, it's often **too late to take action**.
+
+To help **Account Managers** intervene **earlier**, we're introducing a new **“Churning Deals Warning” section**. This section will identify **at-risk accounts** based on behavioural patterns, enabling proactive outreach and retention efforts.
+
+---
+
+## Proposed Criteria for “At-Risk” Accounts
+
+We aim to flag accounts showing early signs of disengagement or decline. The criteria below are initial ideas and open for iteration.
+
+### 1. **Sharp Drop in Monthly Bookings**
+
+- Flag deals where **monthly bookings have dropped by 70% or more** compared to the average of the **previous 3 to 6 months**.
+- Only apply this check to deals that had a **minimum threshold of activity** in the past (e.g., at least 10 bookings/month on average) to avoid noise from small or sporadic users (see the SQL sketch at the end of this section).
+
+### 2. **Sustained Inactivity**
+
+- Flag deals that have had **no activity (0 bookings)** in the **last 3 to 6 months**.
+- This helps catch accounts before they hit the 12-month churn threshold.
+
+### 3. **Step Change in Listings**
+
+- Flag deals that have seen a **significant drop in the number of active listings** (e.g., 50%+ drop compared to 3-month average).
+- A drop in listings often precedes a drop in bookings.
+
+### 4. Other possible options
+
+- Track accounts that haven’t had any contact with their Account Manager in over 6 months.
+- Track accounts that have a bad CSAT score on their bookings (less than 2-3).
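+
+As an illustration, criterion 1 could be expressed against the DWH along these lines. This is only a sketch: it assumes the monthly aggregated metrics model exposes one row per deal and month with a `created_bookings` count, and the thresholds are the ones proposed above.
+
+```sql
+with previous_months as (
+    select id_deal, avg(coalesce(created_bookings, 0)) as avg_bookings
+    from reporting.monthly_aggregated_metrics_history_by_deal
+    where date >= date_trunc('month', current_date) - interval '7 months'
+      and date <  date_trunc('month', current_date) - interval '1 month'
+    group by 1
+),
+last_month as (
+    select id_deal, coalesce(created_bookings, 0) as bookings
+    from reporting.monthly_aggregated_metrics_history_by_deal
+    where date = date_trunc('month', current_date) - interval '1 month'
+)
+select
+    p.id_deal,
+    p.avg_bookings,
+    coalesce(l.bookings, 0) as bookings_last_month
+from previous_months p
+left join last_month l using (id_deal)
+where p.avg_bookings >= 10                               -- minimum past activity threshold
+  and coalesce(l.bookings, 0) <= 0.3 * p.avg_bookings;   -- 70%+ drop vs. the recent average
+```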
+
+---
+
+### Purpose of These Flags
+
+The main goal is **not to automate action**, but to **guide Account Managers' attention** to potentially declining accounts. With earlier signals, they can:
+
+- Reach out before accounts fully disengage.
+- Investigate potential issues (e.g., pricing, onboarding, product fit).
+- Offer support, incentives, or solutions.
+
+---
+
+### Implementation Ideas
+
+- Integrate this section into the existing **Churn Report**, under a new tab or visual.
+- Allow filters by:
+ - Account Manager
+ - Warning reason
+ - Region / Country
+ - Deal Size (segmentation)
+ - Business Scope
+- Show “reason for flag” per deal (e.g., “Bookings dropped 75% in last month”).
+
+---
+
+## Next Steps
+
+- Finalize flagging criteria with input from AMs and data team.
+- Build the logic and integrate into dbt / Power BI.
+- Gather feedback from pilot usage.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Churning Deals Warning – Early Alert System 1d00446ff9c98056b4f9fb5177e4e64d.md:Zone.Identifier b/notion_data_team_no_files/Churning Deals Warning – Early Alert System 1d00446ff9c98056b4f9fb5177e4e64d.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Churning Deals Warning – Early Alert System 1d00446ff9c98056b4f9fb5177e4e64d.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Connecting to Core 6ecf68bb25bc489ea8f38ac971e1a2c1.md b/notion_data_team_no_files/Connecting to Core 6ecf68bb25bc489ea8f38ac971e1a2c1.md
new file mode 100644
index 0000000..644fa84
--- /dev/null
+++ b/notion_data_team_no_files/Connecting to Core 6ecf68bb25bc489ea8f38ac971e1a2c1.md
@@ -0,0 +1,19 @@
+# Connecting to Core
+
+The [Core database](https://www.notion.so/Superhog-Core-Database-70786af3075e46d4a4e3ce303eb9ef00?pvs=21) is the main database for Superhog’s backend.
+
+To access it:
+
+1. Make sure you are connected to the [Data VPN](VPN%20Set%20up%2001affb09a9f648fbad89b74444f920ca.md).
+2. Fetch the reading user credentials from the `Dev` shared folder in Keeper. If you don’t have access to it, ask Pablo.
+3. Use the following connection details:
+ 1. host: superhog.database.windows.net
+ 2. port: 1433
+ 3. database: `live` (if you want to read from production) or `staging` (if you want to read from staging).
+ 4. Authentication type: `SQL Server Authentication`
+
+Be aware that these instructions are specific to Data team members. You might see different instructions elsewhere; feel free to ignore them.
+
+The details above will grant you read access. If you need write access… you really shouldn’t write there at all. Please check your needs with Pablo and Ben Robinson. Never connect to this database with the intention of writing into it unless you have crystal clear, explicit and documented approval from Ben R.
+
+Also note that there are certain networking whitelist exceptions in place to allow Data team members to connect to it. These include our office locations and home IPs. Should your IP change or the office move, please get in touch with Ben Robinson to request an update in the whitelisting rules.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Connecting to Core 6ecf68bb25bc489ea8f38ac971e1a2c1.md:Zone.Identifier b/notion_data_team_no_files/Connecting to Core 6ecf68bb25bc489ea8f38ac971e1a2c1.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Connecting to Core 6ecf68bb25bc489ea8f38ac971e1a2c1.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Connecting to the DWH b7872e2027d041ffac1363b9c2615971.md b/notion_data_team_no_files/Connecting to the DWH b7872e2027d041ffac1363b9c2615971.md
new file mode 100644
index 0000000..f211774
--- /dev/null
+++ b/notion_data_team_no_files/Connecting to the DWH b7872e2027d041ffac1363b9c2615971.md
@@ -0,0 +1,14 @@
+# Connecting to the DWH
+
+Before you connect, make sure you have [set up the VPN](VPN%20Set%20up%2001affb09a9f648fbad89b74444f920ca.md). Otherwise, you won’t be able to connect.
+
+These are the details of our [DWH](https://www.notion.so/DWH-78ce5f76598d49d185fa5fc49a818dc4?pvs=21):
+
+- host: [superhog-dwh-prd.postgres.database.azure.com](http://superhog-dwh-prd.postgres.database.azure.com/)
+- port: 5432
+- database: dwh
+- user and password: there are multiple roles in the DWH, and both personal and service accounts. Check with Pablo which user is right for you depending on your needs.
+
+Unless you have extraordinary needs, these are the only fields you need to modify in the connection creation window in DBeaver.
+
+You can confirm that your new connection works by using the `Test connection...` button on the connection creation window.
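+
+If you prefer to sanity-check the connection outside DBeaver, here’s a minimal sketch with `psycopg2` (the user/password placeholders are illustrative; ask Pablo for the right role, and remember the VPN must be up):
+
+```python
+import psycopg2
+
+conn = psycopg2.connect(
+    host="superhog-dwh-prd.postgres.database.azure.com",
+    port=5432,
+    dbname="dwh",
+    user="<your_user>",
+    password="<your_password>",
+)
+with conn.cursor() as cur:
+    cur.execute("select 1")
+    print(cur.fetchone())
+conn.close()
+```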
\ No newline at end of file
diff --git a/notion_data_team_no_files/Connecting to the DWH b7872e2027d041ffac1363b9c2615971.md:Zone.Identifier b/notion_data_team_no_files/Connecting to the DWH b7872e2027d041ffac1363b9c2615971.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Connecting to the DWH b7872e2027d041ffac1363b9c2615971.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Cool tools afdf8f69b4b0498aaee66ad1a520cc0d.md b/notion_data_team_no_files/Cool tools afdf8f69b4b0498aaee66ad1a520cc0d.md
new file mode 100644
index 0000000..0e33ae7
--- /dev/null
+++ b/notion_data_team_no_files/Cool tools afdf8f69b4b0498aaee66ad1a520cc0d.md
@@ -0,0 +1,76 @@
+# Cool tools
+
+A list of tools we’ve come across that look interesting, but that we haven’t assessed, tested or deployed yet:
+
+## Visualization and data exploration
+
+### Dashboard-oriented
+
+- https://www.lightdash.com/
+- https://redash.io/
+- https://superset.apache.org/
+- https://preset.io/
+
+### Notebook-oriented
+
+- https://www.querybook.org/
+- https://popsql.com/
+- https://jupyterhub.readthedocs.io/en/stable/index.html
+ - And this version seems fitting for our needs: https://tljh.jupyter.org/en/latest/#
+- [https://evidence.dev/](https://evidence.dev/)
+ - Open source. Has cloud options. Markdown + SQL notebooks, with cached data, batch built. Has integration with dbt. Seems like a great chance to go cloud with them, knowing that we can always part ways if needed and self host it.
+- https://observablehq.com/
+ - Shared by Aled
+
+### Hybrid
+
+- https://hex.tech/
+- https://mode.com/
+
+## dbt
+
+- A tool to run quality checks on dbt as pre-commit hooks: https://github.com/dbt-checkpoint/dbt-checkpoint
+- A tool to check and enforce dbt project conventions: https://github.com/godatadriven/dbt-bouncer
+- https://datarecce.io/docs/get-started/ ← Data diff for PRs
+- https://www.synq.io/integrations/dbt/
+ - Incident management for dbt tests, as well as data product definition and ownership.
+ - Get an alert, triage it, categorize, and route towards the right owner according to what failed
+
+## Semantic Layer
+
+- A good overview on what it is and what it covers: https://airbyte.com/blog/the-rise-of-the-semantic-layer-metrics-on-the-fly
+- The clearest option we have: https://cube.dev/
+- A copy-cat: https://github.com/synmetrix/synmetrix
+
+## Data cataloguing, documentation, lineage
+
+- https://github.com/opendatadiscovery/awesome-data-catalogs
+- https://open-metadata.org/
+
+## DWH
+
+- [https://pgt.dev/extensions/pgaudit](https://pgt.dev/extensions/pgaudit) pgaudit, a postgres extension that can log EXPLAIN ANALYZE statements from queries
+- https://github.com/citusdata/citus (an extension that could provide us with columnar storage)
+ - side stuff for fun: a fun benchmark on a poor-man’s columnar storage on Postgres: https://www.brianlikespostgres.com/poor-mans-column-oriented-database.html
+- https://www.postgresql.org/docs/current/postgres-fdw.html
+    - Foreign Data Wrappers extension. We could use this in our local dwh dev environment so that we can read the sync schemas from production instead of having to clone data.
+    - The pro is that the dev experience would be way smoother and faster. No more dumping and restoring, no weird inconsistencies.
+    - The cons are that we would depend on the prd db to develop, and that we would consume resources from the production dwh for development, which is not ideal. Also, the dump-and-restore approach opens a window to being hacky and safely manipulating the data you are working with (something we would lose with FDW).
+- Postgres full text search: [https://supabase.com/blog/postgres-full-text-search-vs-the-rest](https://supabase.com/blog/postgres-full-text-search-vs-the-rest)
+    - This could be interesting to support filtering situations in Dashboard where users struggle with the standard strictness of string matching (in grug speak: make it easy user find name, no care about upper/lower case, space, word order, etc).
+ - More articles here: [https://gist.github.com/cpursley/e3586382c3a42c54ca7f5fef1665be7b](https://gist.github.com/cpursley/e3586382c3a42c54ca7f5fef1665be7b)
+- https://postgresql-anonymizer.readthedocs.io/en/stable/
+    - A postgres extension that allows anonymizing data.
+    - It has this incredibly attractive feature where you can declare certain fields to be masked/faked, then specify that some Postgres role is not allowed to see the real thing. That role can then query the table, but will see those columns masked/faked.
+
+## Other
+
+### Diagram tools
+
+https://diagrams.mingrammer.com/
+
+### Mathesar
+
+Mathesar is a web application that makes working with PostgreSQL databases both simple and powerful.
+
+https://github.com/mathesar-foundation/mathesar
\ No newline at end of file
diff --git a/notion_data_team_no_files/Cool tools afdf8f69b4b0498aaee66ad1a520cc0d.md:Zone.Identifier b/notion_data_team_no_files/Cool tools afdf8f69b4b0498aaee66ad1a520cc0d.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Cool tools afdf8f69b4b0498aaee66ad1a520cc0d.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Credit Notes Downstream Analysis 1de0446ff9c980d9a0c1fd9477d7fcb7.md b/notion_data_team_no_files/Credit Notes Downstream Analysis 1de0446ff9c980d9a0c1fd9477d7fcb7.md
new file mode 100644
index 0000000..fe67655
--- /dev/null
+++ b/notion_data_team_no_files/Credit Notes Downstream Analysis 1de0446ff9c980d9a0c1fd9477d7fcb7.md
@@ -0,0 +1,40 @@
+# Credit Notes Downstream Analysis
+
+# Base Model
+
+`int_xero__credit_notes`
+
+
+
+# Downstream Models
+
+- `int_xero__credit_note_line_items`
+ - No aggregations
+- `int_xero__sales_denom_mart`
+ - No aggregations
+- `int_xero__sales_monthly_trends`
+    - Aggregated at the accounting financial level; we need to update the seed model with the Resolution account codes
+- `xero__credit_note_line_items`
+ - No aggregations
+- `xero__credit_notes`
+ - No aggregations
+- `xero__net_fees`
+    - Double check on this, but it shouldn’t be affected since it’s grouped by `item_code`, which is null for Host Resolution payments.
+    Indeed, this change doesn’t affect it for the reason previously mentioned.
+- `xero__net_fees_by_deal`
+    - This model would be affected since it aggregates all invoices and credit notes by `date` and `id_deal`, regardless of their type.
+    - A possible solution would be to add a new field `is_resolution_payout` to `xero__credit_notes` and use it to filter out all credit notes from Resolutions Payouts
+- `xero__sales_denom_mart`
+ - No aggregations
+- `xero__sales_monthly_trends`
+ - No aggregations
+    - We do need to include the new `account_code` for all Resolutions Payouts in `stg_seed__accounting_aggregations`
+- `int_kpis__metric_daily_invoiced_revenue`
+    - It does add new records, but with all dimension values as 0, since the model adds the values filtering by the `accounting_root_aggregation` or `accounting_kpis_aggregation` set for the different account codes in `stg_seed__accounting_aggregations`
+    This already adds records with all dimension values as 0 for every other account code that is not in `stg_seed__accounting_aggregations`
+- `int_kpis__metric_daily_total_and_retained_revenue`
+ - No impact since it aggregates by the different revenue sources
+- `int_kpis__metric_monthly_invoiced_revenue`
+ - No impact since it aggregates by the different revenue sources
+- `int_kpis__metric_mtd_invoiced_revenue`
+ - No impact since it aggregates by the different revenue sources
\ No newline at end of file
diff --git a/notion_data_team_no_files/Credit Notes Downstream Analysis 1de0446ff9c980d9a0c1fd9477d7fcb7.md:Zone.Identifier b/notion_data_team_no_files/Credit Notes Downstream Analysis 1de0446ff9c980d9a0c1fd9477d7fcb7.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Credit Notes Downstream Analysis 1de0446ff9c980d9a0c1fd9477d7fcb7.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/DBeaver set up 12e0446ff9c980de9ac2dc3bb0e9b45d.md b/notion_data_team_no_files/DBeaver set up 12e0446ff9c980de9ac2dc3bb0e9b45d.md
new file mode 100644
index 0000000..caaf284
--- /dev/null
+++ b/notion_data_team_no_files/DBeaver set up 12e0446ff9c980de9ac2dc3bb0e9b45d.md
@@ -0,0 +1,52 @@
+# DBeaver set up
+
+DBeaver is the de facto SQL client in the Data team.
+
+Feel free to use anything you want, but if you don’t have a preference, picking DBeaver is probably a decent option.
+
+## Installing
+
+- Make sure to get DBeaver **Community Edition**. This is the free, open-source version, and it’s what we’re currently using.
+- You can download from here: [https://dbeaver.io/download/](https://dbeaver.io/download/)
+- Feel free to pick the most recent version; we haven’t decided to freeze on any specific one.
+- Install with all default settings
+
+## Creating connections to Postgres
+
+- To create a new connection to Postgres, click on the Plug + button in the top left corner of DBeaver and select PostgreSQL as your database, which will lead you to a screen like this:
+
+
+
+- [Connecting to the DWH](Connecting%20to%20the%20DWH%20b7872e2027d041ffac1363b9c2615971.md): here you can check all the details to connect to the DWH.
+- *Optional, only for Data Team members*
+
+    To create a connection to Postgres for your own local DWH, you can find all the step-by-step information in the DWH dbt project **(data-dwh-dbt-project\dev-env\local_dwh.md)**. There you will find all the requirements and how to set up `dwh` and `dwh_hybrid` so you can develop locally.
+
+## Creating connections to SQL Server
+
+- To create a new connection to a SQL Server, just like before, click on the Plug + button in the top left corner of DBeaver and select SQL Server as your database. This will lead you to a screen very similar to the one shown before, where you can add all the necessary connection details.
+- [Connecting to Core](Connecting%20to%20Core%206ecf68bb25bc489ea8f38ac971e1a2c1.md): here you can find all the details to connect to the Core database.
+
+## Tips and gotchas
+
+- See all databases when connecting to a Postgres server: due to a weird default setting in DBeaver, you won’t be able to see all databases. You might run into this after creating your local copies of `dwh` and `dwh_hybrid`. It is very simple to solve but also easy to forget: go to your database, edit the connection, and tick the `Show all databases` box, which you should see in the Main tab. This could change in future versions of DBeaver, but it shouldn’t be too hard to find.
+
+
+
+- To deactivate the automatic upper-casing of keywords in DBeaver, in case it’s annoying for you, go to:
+ - Window → Preferences → Editors → SQL Editor → Formatting
+
+ Here you can configure the formatting of your queries on DBeaver however you like
+
+ 
+
+- If you want to connect to the DWH, make sure to review the VPN guide; otherwise this won’t work. [VPN Set up](VPN%20Set%20up%2001affb09a9f648fbad89b74444f920ca.md)
+- DBeaver makes it easy to export data in various formats (CSV, JSON, SQL scripts, etc.). At the bottom of the results panel you can use the calculation features to get quick information such as the total number of rows returned by your query, the option to load all the results of the query so you can export all the data (by default only the first 200 rows are loaded), **though be careful with this because it might be too much data and computationally demanding,** and the export data button.
+
+
+
+- You can also filter the results of your query directly in the results table: simply click on the blue arrow to the right of each column to find multiple filters that make it easy to analyse results. This is only recommended for small queries.
+- If you have any issues backing up tables in DBeaver or creating dumps, check out the following documentation [Can’t backup single tables from DWH in DBeaver](Can%E2%80%99t%20backup%20single%20tables%20from%20DWH%20in%20DBeaver%20df6fc66189db415faa9715376832e5ba.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/DBeaver set up 12e0446ff9c980de9ac2dc3bb0e9b45d.md:Zone.Identifier b/notion_data_team_no_files/DBeaver set up 12e0446ff9c980de9ac2dc3bb0e9b45d.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/DBeaver set up 12e0446ff9c980de9ac2dc3bb0e9b45d.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/DE survival without Pablo 2881b273bcaa46e4b0ae5fac1c1ba728.md b/notion_data_team_no_files/DE survival without Pablo 2881b273bcaa46e4b0ae5fac1c1ba728.md
new file mode 100644
index 0000000..fd7cb38
--- /dev/null
+++ b/notion_data_team_no_files/DE survival without Pablo 2881b273bcaa46e4b0ae5fac1c1ba728.md
@@ -0,0 +1,83 @@
+# DE survival without Pablo
+
+Reference page for when Pablo is not around and something goes wrong.
+
+# General stuff
+
+- Remember, the deployment instructions for the Data infra [live here](https://guardhog.visualstudio.com/Data/_git/data-infra-script?path=/platform-overview.md). When in doubt on how something is set up, or how something could be re-deployed, check there. I’ll skip a lot of details on how things are built here because you can find them there.
+- If you feel unsure about touching stuff you don’t understand in Azure, ask Ben Robinson for help.
+
+# DWH
+
+| *when…* | *then…* |
+| --- | --- |
+| DWH is not responsive | - Troubleshoot to find out if it’s a network issue (can’t reach DWH) or if the DWH is effectively turned off/locked.
+- If it’s a VPN issue, go to VPN section.
+- If it’s a DWH issue, try rebooting from Azure portal.
+- If that doesn’t do the trick, try restoring a backup. DWH has 1 backup per day, up to 7 days into the past. |
+| DWH runs out of space | - Visit the Azure Portal and increase the disk size of the database. |
+
+# VPN/Jumphost machine
+
+| *when…* | *then…* |
+| --- | --- |
+| VPN becomes unresponsive | - Try to run a reboot on the jumphost machine. Wireguard is set as a `systemd` service, so it should start again on boot.
+- If that doesn’t work, you’ll have to check the logs to understand what’s wrong. SSH into the machine and run `sudo journalctl -u wg-quick@wg0.service` to do so.
+- You can run `sudo systemctl status wg-quick@wg0.service` to simply check if the service is running. If it’s running fine, you should see a green `active` |
+| You need to give someone new/another device access | - Create a new key pair.
+- Use the existing key configurations or the infra script documentation to understand how to add it on both the VPN server and the client device.
+- Do not try to share your existing key with more devices: each keypair should only be active in one device at a time. |
+| You lock yourself out of the VPN because it stopped working and you can’t SSH into the jumphost | Don’t despair, this is still solvable. Add an exceptional rule in the Azure Network Security Group (NSG) that the Jumphost is using to **TEMPORARILY** allow yourself to SSH on port 22 on the public IP.
+
+**REMEMBER TO REMOVE THE EXCEPTION ONCE YOU ARE DONE!!!!!!**
+
+You can check details on how to do this in the `data-infra-script` repository. |
+
+# dbt
+
+| *when…* | *then…* |
+| --- | --- |
+| You need to execute `dbt` on demand | - SSH into `airbyte-prd` machine.
+- Execute the following command `/bin/bash /home/azureuser/run_dbt.sh`
+- You can check the execution logs in `/home/azureuser/dbt_run.log` |
+| The `dbt run` is failing and you don’t understand why | - SSH into `airbyte-prd` machine.
+- Check the execution logs in `/home/azureuser/dbt_run.log` to find the errors
+- Pull the thread from there. |
+| You need to run a full refresh | - I would suggest making a sneaky `dbt run` from your own laptop, pointing to `prd` and with the `--full-refresh` flag active. Be careful since this can trigger very heavy runs. |
+
+# Airbyte
+
+| *when…* | *then…* |
+| --- | --- |
+| You need to add a new table from Core into the DWH | - Pick the right existing connection depending on the source schema and incrementality (full refresh vs incremental).
+- Add the table stream. |
+| A sync job is failing | - Visit the Airbyte UI.
+- Find the failed job and read the logs.
+- Pull the thread from there. |
+| Upstream changes in a data model are creating conflicts | - This is a tricky area because there are many ways to handle it.
+- One option is to convince whoever owns the upstream data model to go back to the previous state. If so, the issue fixes itself without changing anything in Airbyte.
+- If this isn’t possible, you might decide to Reset the stream and sync all from source again. Bear in mind this can’t be easily reverted and might break downstream `dbt` models. |
+| Data between Core and Airbyte has run out of sync due to some mistake, for instance violated `UpdatedDate` fields in Core | - Visit the affected connections and streams and reset them. |
+
+# `airbyte-prd` machine
+
+This machine is where Airbyte, dbt and `xexe` run.
+
+- Airbyte is deployed as a series of docker containers orchestrated with docker compose. The docker compose file can be found in `/home/azureuser/airbyte/`
+- Both dbt and `xexe` run on a scheduled basis. Their execution is triggered by `cron` (commands live on `azureuser`'s crontab) and the running scripts are on `/home/azureuser/run_dbt.sh` and `/home/azureuser/run_xexe.sh`
+
+This is typically uber-stable and nothing goes wrong.
+If something smells fishy, a simple reboot should do the trick, as all services will start working on boot.
+
+If any of the services stop working, here’s where you can go to research:
+
+- Airbyte → See the logs of the containers. If the UI still works, you can also read the logs there.
+- dbt → Check the file `/home/azureuser/dbt_run.log`
+- `xexe` → Check the file `/home/azureuser/xexe_run.log`
+- `anaxi` → Check the file `/home/azureuser/anaxi_run.log`
+
+# Power BI Gateway
+
+The PBI Gateway software is running on a Windows VM named `pbi-gateway-prd`.
+
+The software is just installed there. I have no clue on how anything could go wrong: the service has been working like a charm since day one.
\ No newline at end of file
diff --git a/notion_data_team_no_files/DE survival without Pablo 2881b273bcaa46e4b0ae5fac1c1ba728.md:Zone.Identifier b/notion_data_team_no_files/DE survival without Pablo 2881b273bcaa46e4b0ae5fac1c1ba728.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/DE survival without Pablo 2881b273bcaa46e4b0ae5fac1c1ba728.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Dagster hello-world 723420fec494478b9c89d308b0f213a7.md b/notion_data_team_no_files/Dagster hello-world 723420fec494478b9c89d308b0f213a7.md
new file mode 100644
index 0000000..18c3e6e
--- /dev/null
+++ b/notion_data_team_no_files/Dagster hello-world 723420fec494478b9c89d308b0f213a7.md
@@ -0,0 +1,125 @@
+# Dagster hello-world
+
+## First shot following the quickstart
+
+I’m going to begin by following this: [https://docs.dagster.io/getting-started/quickstart](https://docs.dagster.io/getting-started/quickstart)
+
+Even though there’s also this guide: [https://docs.dagster.io/guides/running-dagster-locally](https://docs.dagster.io/guides/running-dagster-locally)
+
+- I’ve made a new directory dedicated to this called `dagster-hello-world`.
+- I’ll try to use `poetry`, so I start with a good old `poetry init` to get the project started. Just accepted all defaults, no fancy configs.
+- In there, I’m running a `poetry add dagster dagster-webserver` just like that, with no `venv` or anything. Straight from the global Python runtime.
+- Got these errors back
+
+ ```
+ Creating virtualenv dagster-hello-world-bcH0h4Hq-py3.10 in /home/pablo/.cache/pypoetry/virtualenvs
+ Using version ^1.8.12 for dagster
+ Using version ^1.8.12 for dagster-webserver
+
+ Updating dependencies
+ Resolving dependencies... (0.2s)
+
+ The current project's supported Python range (>=3.10,<4.0) is not compatible with some of the required packages Python requirement:
+ - dagster-webserver requires Python <3.13,>=3.8, so it will not be satisfied for Python >=3.13,<4.0
+
+ Because no versions of dagster-webserver match >1.8.12,<2.0.0
+ and dagster-webserver (1.8.12) requires Python <3.13,>=3.8, dagster-webserver is forbidden.
+ So, because dagster-hello-world depends on dagster-webserver (^1.8.12), version solving failed.
+
+ • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties
+
+ For dagster-webserver, a possible solution would be to set the `python` property to ">=3.10,<3.13"
+
+ https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
+ https://python-poetry.org/docs/dependency-specification/#using-environment-markers
+ ```
+
+- After fucking around for a bit, I got it working by changing the `pyproject.toml` file. The change was this line:
+
+ ```toml
+ [tool.poetry.dependencies]
+ python = "^3.10"
+ ```
+
+ to this (`^` operator removed)
+
+ ```toml
+ [tool.poetry.dependencies]
+ python = "3.10"
+ ```
+
+- Now I tried to run `dagster dev` . Not working at all.
+
+I’m starting from scratch, that was weird and nothing works. I started here: [https://docs.dagster.io/getting-started/install#installing-dagster-using-poetry](https://docs.dagster.io/getting-started/install#installing-dagster-using-poetry)
+
+Now I’m going to start from here instead: [https://docs.dagster.io/getting-started/quickstart](https://docs.dagster.io/getting-started/quickstart)
+
+- First, I run this
+
+ ```bash
+ git clone https://github.com/dagster-io/dagster-quickstart && cd dagster-quickstart
+ ```
+
+- Then (it wasn’t mentioned in the guide) I create a python `venv` with `python3 -m venv venv`
+- Then I activate the `venv` and run `pip install -e ".[dev]"`
+- Then `dagster dev`... and the UI appears
+- I run the pipelines and materialize the examples as the quickstart indicates. It’s roughly clear.
+- I now understand that an `asset` in Dagster lingo is pretty much declaring:
+ - That a DAG node exists and has certain features
+ - The code that materializes it
+- I’m going to need more practice to visualize more clearly how we would use this.
+
+## dbt example
+
+The docs from dagster have a guide on how to integrate with a dbt project. I’ll try to do that with our project.
+
+Link: https://docs.dagster.io/integrations/dbt/using-dbt-with-dagster
+
+- I begin by making a copy of our dbt git repo as-is
+- Then I do some pip installs on my main Python interpreter:
+
+ ```bash
+ pip install dagster-dbt dagster-webserver
+ ```
+
+- Then I run
+
+ ```bash
+ dagster-dbt project scaffold --project-name sh_dagster_dbt --dbt-project-dir ./dbt-dagster-playground/
+ ```
+
+- Ok, apparently this has created an entirely new Dagster project at `home/pablo/sh_dagster_dbt`. This new project has a hardcoded filepath reference to the dbt project folder. I must say this is shaky as hell, but well, at least it’s transparent and visible.
+- Now, from the root of the dagster project folder, I run `dagster dev` to get the UI started and I see the whole dbt project displayed. Pretty neat. All dependencies are there, and dagster even parses the documentation and displays it.
+- I have a very tempting `Materialize All` button, which I click like a kid not knowing what will happen.
+- Stuff blew up everywhere, apparently complaining about the database not being reachable. The `Materialize All` button tried to run both the models and their tests. That means it was trying to run with the default profile (`dwh_hybrid`). Reasonable, but I would expect some way to pick the profile in dagster. Can’t find it anywhere.
+- I started my local postgres and tried to materialize some staging models. It works! I can now travel through the assets graph and select any subset of models and run them. The UI shows the last time the model was materialized and logs on the run.
+- I also found some hidden menu where the config of the run can be modified, including picking which profile should be used. I think I’m getting the pattern here:
+ - Run templates can be defined through the UI by drag and drop…
+ - … but for the stable stuff, what you want to do is to define it in Python files
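+
+    For reference, this is roughly the shape such a Python definition takes according to the dagster-dbt docs (a sketch only; the path and names below are illustrative, not exactly what the scaffold generated for me):
+
+    ```python
+    from pathlib import Path
+
+    from dagster import AssetExecutionContext, Definitions
+    from dagster_dbt import DbtCliResource, dbt_assets
+
+    # Illustrative path; the real scaffold hardcodes the dbt project dir it was pointed at.
+    DBT_PROJECT_DIR = Path.home() / "dbt-dagster-playground"
+
+    dbt_resource = DbtCliResource(project_dir=str(DBT_PROJECT_DIR))
+
+
+    # Requires a manifest.json, i.e. the dbt project must have been parsed/compiled first.
+    @dbt_assets(manifest=DBT_PROJECT_DIR / "target" / "manifest.json")
+    def dwh_dbt_assets(context: AssetExecutionContext, dbt: DbtCliResource):
+        # Each dbt model shows up as a Dagster asset; `dbt build` runs models + tests.
+        # Presumably the profile/target could be picked here too, e.g. ["build", "--target", "prd"].
+        yield from dbt.cli(["build"], context=context).stream()
+
+
+    defs = Definitions(assets=[dwh_dbt_assets], resources={"dbt": dbt_resource})
+    ```
+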
+- Anyways, before trying to build a “pipeline” in the old sense, I want to see if I can also add airbyte here in the picture. If I can, then I can try to map dependencies and make a joint pipeline across airbyte and dbt.
+- I have an old locally deployed airbyte compose, so I’ll just use that to mess around.
+- I’ve had to `pip install dagster-airbyte`
+- I’ve created a file in `sh_dagster_dbt/airbyte.py` and added this bit:
+
+ ```python
+ from dagster_airbyte import AirbyteResource
+
+ airbyte_instance = AirbyteResource(
+ host="localhost",
+ port="8000",
+ # If using basic auth, include username and password:
+ username="airbyte",
+ password="airbyte",
+ )
+ ```
+
+- Through the Airbyte UI, I’ve created a source pointing to our Xero production instance and made a connection with my local DWH.
+
+## Links
+
+- Docs: [https://docs.dagster.io/getting-started](https://docs.dagster.io/getting-started)
+- MDS template: [https://github.com/dagster-io/dagster/tree/master/examples/assets_modern_data_stack](https://github.com/dagster-io/dagster/tree/master/examples/assets_modern_data_stack)
+- General repo: [https://github.com/dagster-io/dagster/tree/master](https://github.com/dagster-io/dagster/tree/master)
+- dbt + dagster tutorial: https://docs.dagster.io/integrations/dbt/using-dbt-with-dagster
+- airbyte + dagster tutorial: https://docs.dagster.io/integrations/airbyte
\ No newline at end of file
diff --git a/notion_data_team_no_files/Dagster hello-world 723420fec494478b9c89d308b0f213a7.md:Zone.Identifier b/notion_data_team_no_files/Dagster hello-world 723420fec494478b9c89d308b0f213a7.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Dagster hello-world 723420fec494478b9c89d308b0f213a7.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data Issue with Resolutions Center Manual Forms 2160446ff9c9804ba5d7f8cf8d3b73ea.md b/notion_data_team_no_files/Data Issue with Resolutions Center Manual Forms 2160446ff9c9804ba5d7f8cf8d3b73ea.md
new file mode 100644
index 0000000..39afc20
--- /dev/null
+++ b/notion_data_team_no_files/Data Issue with Resolutions Center Manual Forms 2160446ff9c9804ba5d7f8cf8d3b73ea.md
@@ -0,0 +1,32 @@
+# Data Issue with Resolutions Center Manual Forms
+
+### Context
+
+We currently extract data from the **Resolutions Center**, stored in **Cosmos DB**, and use it to build models and tables in the **DWH**. These are then consumed by some reports, including the **Resolution Incidents** report.
+
+Historically, based on initial conversations with the Resolutions team (especially Manu), we established that certain fields, particularly **Guest**, **Host**, and **Booking** data, were **mandatory** for all resolution records. Our models and data tests were built with this assumption, and we have strict tests to flag missing values for these fields.
+
+### Current Issue
+
+Recently, we have been receiving alerts from our data tests, flagging missing mandatory fields. Upon further investigation, Ant informed us that some **manual forms** created within the Resolutions Center (used for external partners like **Guesty** and **Holidu**) may not contain complete Guest, Host, or Booking information.
+
+This is a change from our initial understanding and introduces exceptions to the previously enforced rules.
+
+### Implications
+
+1. **Data Tests**
+ - Our current tests assume these fields must always be present.
+ - We will need to update them to **exclude manual forms** from this requirement (e.g., using a flag or identifying logic for manual entries).
+2. **Model Logic**
+ - We need to verify if downstream models depend on the presence of these fields.
+ - Any logic assuming complete Guest/Host/Booking data should be reviewed for potential failure or bias.
+3. **Resolution Incident Report**
+ - Check if the **Resolution Incidents report** is affected by these manual forms.
+ - If so, we may need to add filters, fallback logic, or a dedicated section for manual forms.
+
+### Next Steps
+
+- [ ] Update data tests to handle manual forms differently.
+- [ ] Review downstream model dependencies and document any required changes.
+- [ ] Assess the impact on the Resolution Incident report and plan necessary adjustments.
+- [ ] Align with Resolutions team to confirm any other exceptions or changes in data expectations.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data Issue with Resolutions Center Manual Forms 2160446ff9c9804ba5d7f8cf8d3b73ea.md:Zone.Identifier b/notion_data_team_no_files/Data Issue with Resolutions Center Manual Forms 2160446ff9c9804ba5d7f8cf8d3b73ea.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data Issue with Resolutions Center Manual Forms 2160446ff9c9804ba5d7f8cf8d3b73ea.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data Requirements for New Dash New Pricing 1420446ff9c980eaab6ec6cb02714557.md b/notion_data_team_no_files/Data Requirements for New Dash New Pricing 1420446ff9c980eaab6ec6cb02714557.md
new file mode 100644
index 0000000..6ed53fa
--- /dev/null
+++ b/notion_data_team_no_files/Data Requirements for New Dash New Pricing 1420446ff9c980eaab6ec6cb02714557.md
@@ -0,0 +1,24 @@
+# Data Requirements for New Dash/New Pricing
+
+# Revenue reporting
+
+We need to report revenue at different aggregation levels. For that, we need a source of truth on billing at the service level. This should be append-only and record any change to the applied services over time, so we can replicate the exact reality of a service applied at any point in time.
+
+After the new release, we need confirmation that:
+
+1. `ProductServiceBillingItem` and `ProductProtectionPlanBillingItem` are the go-to tables for the source of truth
+2. That the abovementioned tables will contain future records and any historical records that need to be charged, for data completeness
+3. That the abovementioned tables are append-only, despite having an `UpdatedDate` field.
+4. That the abovementioned tables will reflect any change by adding a negative line that cancels out previous lines when a change needs to be applied to a Billing Item (and perhaps even a reference to what is getting cancelled out?)
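+
+To illustrate what we mean in point 4, here is a toy sketch of how we would expect to reconstruct the currently applicable charge for a billing item from an append-only table (the records and column names below are made up for illustration):
+
+```python
+from collections import defaultdict
+
+# Hypothetical append-only billing lines: a correction shows up as a negative
+# line that cancels the original, plus a new line, never as an update in place.
+billing_lines = [
+    {"billing_item_id": "A", "amount": 100},   # original charge
+    {"billing_item_id": "A", "amount": -100},  # cancels the original line
+    {"billing_item_id": "A", "amount": 90},    # corrected charge
+]
+
+current_state = defaultdict(int)
+for line in billing_lines:
+    current_state[line["billing_item_id"]] += line["amount"]
+
+print(dict(current_state))  # {'A': 90} -> the net, currently applicable charge
+```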
+
+# Excluding test users
+
+All real users should have a Deal ID. All fake/test/demo/etc users should NOT have a Deal Id.
+
+We cannot exclude them until the above conditions are fulfilled; once that’s done, we will exclude all users that do not have a Deal Id.
+
+We need to be notified once this change is in place so we can modify the code in Data to reflect the user exclusion accordingly.
+
+# Backfill BookingViewToService with the missing Ids
+
+For some historical cases in `BookingViewToService` that affect the Basic Screening service, we have neither the `ProtectionPlanId` nor the `ProductServiceId`. In order to guarantee consistency in the DWH, having these filled would simplify our work and allow us to run proper data tests.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data Requirements for New Dash New Pricing 1420446ff9c980eaab6ec6cb02714557.md:Zone.Identifier b/notion_data_team_no_files/Data Requirements for New Dash New Pricing 1420446ff9c980eaab6ec6cb02714557.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data Requirements for New Dash New Pricing 1420446ff9c980eaab6ec6cb02714557.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data Team Handbook c155c34522904b438eb93c7367bedfa3.md b/notion_data_team_no_files/Data Team Handbook c155c34522904b438eb93c7367bedfa3.md
new file mode 100644
index 0000000..9876ddf
--- /dev/null
+++ b/notion_data_team_no_files/Data Team Handbook c155c34522904b438eb93c7367bedfa3.md
@@ -0,0 +1,70 @@
+# Data Team Handbook
+
+Welcome to the Data Team Handbook. The handbook is a reference for Data Team members to rely on for topics such as our systems, processes, tools, conventions, etc.
+
+When in doubt on how something works, come here. And if you spot a gap, feel free to fill it in.
+
+This Handbook is inspired by [the awesome example set by Gitlab](https://handbook.gitlab.com/handbook/business-technology/data-team/). If you are unsure on how to structure this handbook, that is a great reference to grab ideas from.
+
+# Data Team Members
+
+| Name | Role |
+| --- | --- |
+| Pablo (pablo.martin@superhog.com) | Lead Data Engineer |
+| Uri (oriol.roque@superhog.com) | Lead Data Analyst |
+| Joaquín (joaquin.ossa@superhog.com) | Data Analyst |
+
+# Goals and duties
+
+🏗️ **WIP**
+
+# Main processes and lines of work
+
+🏗️ **WIP**
+
+- Green flag plan
+ - Infra maintenance
+ - Data Pipelines
+ - Data Counter
+- Stable work
+- Adhoc and one-shot research
+- Demand management
+
+# Data Platform
+
+🏗️ **WIP**
+
+- Data Catalogue
+ - What is it
+ - How to use it
+- Datawarehouse
+ - Architecture and infra
+- Airbyte
+- dbt project
+- PowerBI
+- Other infra topics
+
+# How-to
+
+🏗️ **WIP**
+
+- PBI
+ - Create a new report
+ - Update an existing report/app
+ - Rollback
+ - Provide colleagues with access to reports
+- dbt
+ - Developer environment
+ - Work on the project
+- DWH
+ - Connect
+ - Monitor
+ - Manage access
+- Airbyte
+ - Monitor
+ - Connect
+- Other infra topics
+ - Connect with VPN
+
+# North-stars
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data Team Handbook c155c34522904b438eb93c7367bedfa3.md:Zone.Identifier b/notion_data_team_no_files/Data Team Handbook c155c34522904b438eb93c7367bedfa3.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data Team Handbook c155c34522904b438eb93c7367bedfa3.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data Team Internals cf0d13d49c9643a987527e1fe2f65d49.md b/notion_data_team_no_files/Data Team Internals cf0d13d49c9643a987527e1fe2f65d49.md
new file mode 100644
index 0000000..f5f0b6d
--- /dev/null
+++ b/notion_data_team_no_files/Data Team Internals cf0d13d49c9643a987527e1fe2f65d49.md
@@ -0,0 +1,75 @@
+# Data Team Internals
+
+[Data Team Handbook](Data%20Team%20Handbook%20c155c34522904b438eb93c7367bedfa3.md)
+
+[Onboarding checklist](Onboarding%20checklist%20d5eb8cb36b404fc9a0ccacddf9862001.md)
+
+[Data Team Weeklies ](Data%20Team%20Weeklies%202160446ff9c980dcb037c8ebac156219.md)
+
+[Sync Meeting Notes](Sync%20Meeting%20Notes%201830446ff9c9809d89a0e8a5321b1697.md)
+
+[How-tos and tips](How-tos%20and%20tips%206fa0131e44854e7aadfab0f837de9276.md)
+
+[Our repos & monitoring](Our%20repos%20&%20monitoring%20601c169a59e4469ca38d5493a6356bc1.md)
+
+[Retrospectives](Retrospectives%20ab52ef5a73a040b6aa9a07121b0e0aac.md)
+
+[Incident Management](Incident%20Management%204829884213d744d4884be6c53988e696.md)
+
+[Quarterlies](Quarterlies%20959c63c9ec3641ab928840488f852b8e.md)
+
+[SSH Pubkeys](SSH%20Pubkeys%208aecd0622caf4512a22ee099ff49f208.md)
+
+[Business KPIs Documentation](Business%20KPIs%20Documentation%20292c74f608eb46d8b9887e239046cc87.md)
+
+[New Pricing + New Dashboard (from Data POV)](New%20Pricing%20+%20New%20Dashboard%20(from%20Data%20POV)%201130446ff9c980ea8790e6ab500d3683.md)
+
+[Payment Validation Set data problems](Payment%20Validation%20Set%20data%20problems%202382b2ecb24243449caac4687f044391.md)
+
+[HubSpot Data Integration](HubSpot%20Data%20Integration%201120446ff9c980439236e387507aa476.md)
+
+[Glad you’re back!](Glad%20you%E2%80%99re%20back!%201130446ff9c98005a326f52608abfd91.md)
+
+[DE survival without Pablo](DE%20survival%20without%20Pablo%202881b273bcaa46e4b0ae5fac1c1ba728.md)
+
+[Orchestration Engine Project](Orchestration%20Engine%20Project%20a10527d3c6144b58baf202cbeb657daa.md)
+
+[Invoice Screen & Protect](Invoice%20Screen%20&%20Protect%201610446ff9c980f88de6d6293b4fab03.md)
+
+[Cool tools](Cool%20tools%20afdf8f69b4b0498aaee66ad1a520cc0d.md)
+
+[Guerrilla engineering before Pablo’s temporarily leaves](Guerrilla%20engineering%20before%20Pablo%E2%80%99s%20temporarily%20l%2015a0446ff9c98068a3d0efdb31680f95.md)
+
+[20241217 - Long-term Data topics with Rich](20241217%20-%20Long-term%20Data%20topics%20with%20Rich%2015f0446ff9c980bfb932ee563ba1b25e.md)
+
+[20241218 - Ways of working with Matt](20241218%20-%20Ways%20of%20working%20with%20Matt%201600446ff9c9801fa112d0ff4a431667.md)
+
+[dbt 1.7 to 1.9 upgrade](dbt%201%207%20to%201%209%20upgrade%201740446ff9c98054915fd620df86339a.md)
+
+[dbt 1.9.1 to 1.9.8 upgrade](dbt%201%209%201%20to%201%209%208%20upgrade%202100446ff9c980cbaa01e84c22bdd13c.md)
+
+[Data issues collab. with squads](Data%20issues%20collab%20with%20squads%2017e0446ff9c980a2b993d895eef6d804.md)
+
+[HTVR Invoicing explainer](HTVR%20Invoicing%20explainer%201c10446ff9c9801ca39ad230f8931139.md)
+
+[Churning Deals Warning – Early Alert System](Churning%20Deals%20Warning%20%E2%80%93%20Early%20Alert%20System%201d00446ff9c98056b4f9fb5177e4e64d.md)
+
+[20250414 Old Dash Invoicing - Exclude New Dash data](20250414%20Old%20Dash%20Invoicing%20-%20Exclude%20New%20Dash%20dat%201d50446ff9c9807aa1edcca0c9e97082.md)
+
+[Analysis: Potential Guest Revenue Loss – Airbnb Bookings](Analysis%20Potential%20Guest%20Revenue%20Loss%20%E2%80%93%20Airbnb%20Boo%201d70446ff9c98085be28f77ba41e7e9f.md)
+
+[Finance Workflow Change – Host Resolution Payments Handling](Finance%20Workflow%20Change%20%E2%80%93%20Host%20Resolution%20Payments%201de0446ff9c980ccb021fa75288129b0.md)
+
+[Credit Notes Downstream Analysis](Credit%20Notes%20Downstream%20Analysis%201de0446ff9c980d9a0c1fd9477d7fcb7.md)
+
+[Request for ideas [Data Team]](Request%20for%20ideas%20%5BData%20Team%5D%201e30446ff9c980bc80b1fbd141fb25c4.md)
+
+[Guest Products DWH Refactor Ramblings](Guest%20Products%20DWH%20Refactor%20Ramblings%201ec0446ff9c98055872fc4c29b23e40e.md)
+
+[Finance Reporting App](Finance%20Reporting%20App%201f40446ff9c980699121cb0d804a65e6.md)
+
+[Reproducing versioning bug in dbt](Reproducing%20versioning%20bug%20in%20dbt%202100446ff9c98034902fe1c7080b3698.md)
+
+[Data Issue with Resolutions Center Manual Forms](Data%20Issue%20with%20Resolutions%20Center%20Manual%20Forms%202160446ff9c9804ba5d7f8cf8d3b73ea.md)
+
+[Backing up WG Hub config](Backing%20up%20WG%20Hub%20config%202260446ff9c9808b8cd9ecc144ce7106.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data Team Internals cf0d13d49c9643a987527e1fe2f65d49.md:Zone.Identifier b/notion_data_team_no_files/Data Team Internals cf0d13d49c9643a987527e1fe2f65d49.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data Team Internals cf0d13d49c9643a987527e1fe2f65d49.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data Team Weeklies 2160446ff9c980dcb037c8ebac156219.md b/notion_data_team_no_files/Data Team Weeklies 2160446ff9c980dcb037c8ebac156219.md
new file mode 100644
index 0000000..9640be0
--- /dev/null
+++ b/notion_data_team_no_files/Data Team Weeklies 2160446ff9c980dcb037c8ebac156219.md
@@ -0,0 +1,9 @@
+# Data Team Weeklies
+
+[2025-07-09 - Data Team Weekly](2025-07-09%20-%20Data%20Team%20Weekly%2022b0446ff9c98090baa0fdb0e60ca7bd.md)
+
+[2025-07-02 - Data Team Weekly](2025-07-02%20-%20Data%20Team%20Weekly%202240446ff9c980c68f88faf0087fad5e.md)
+
+[2025-06-25 - Data Team Weekly](2025-06-25%20-%20Data%20Team%20Weekly%2021d0446ff9c980e1b064fc64705671f7.md)
+
+[2025-06-18 - Data Team Weekly](2025-06-18%20-%20Data%20Team%20Weekly%202160446ff9c980ec8291d85f78e3d29f.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data Team Weeklies 2160446ff9c980dcb037c8ebac156219.md:Zone.Identifier b/notion_data_team_no_files/Data Team Weeklies 2160446ff9c980dcb037c8ebac156219.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data Team Weeklies 2160446ff9c980dcb037c8ebac156219.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data issues collab with squads 17e0446ff9c980a2b993d895eef6d804.md b/notion_data_team_no_files/Data issues collab with squads 17e0446ff9c980a2b993d895eef6d804.md
new file mode 100644
index 0000000..684adab
--- /dev/null
+++ b/notion_data_team_no_files/Data issues collab with squads 17e0446ff9c980a2b993d895eef6d804.md
@@ -0,0 +1,11 @@
+# Data issues collab. with squads
+
+This page explains how we handle data *issues* spotted by the Data team with the different squad teams.
+
+A data issue is an occurrence of data in a source system not being as expected. This could include:
+
+- Data models that have changed unexpectedly (think: we’ve deleted a key column without coordinating across teams).
+- Invalid data due to technical constraints (think: this column should be unique but suddenly it isn’t).
+- Data that is clearly wrong business-wise (even if it’s technically valid)
+
+The scope of these issues is pretty much SQL Server and the multiple Cosmos DB databases we use for our applications.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data issues collab with squads 17e0446ff9c980a2b993d895eef6d804.md:Zone.Identifier b/notion_data_team_no_files/Data issues collab with squads 17e0446ff9c980a2b993d895eef6d804.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data issues collab with squads 17e0446ff9c980a2b993d895eef6d804.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data quality assessment Billable Bookings 97008b7f1cbb4beb98295a22528acd03.md b/notion_data_team_no_files/Data quality assessment Billable Bookings 97008b7f1cbb4beb98295a22528acd03.md
new file mode 100644
index 0000000..8b5f208
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment Billable Bookings 97008b7f1cbb4beb98295a22528acd03.md
@@ -0,0 +1,576 @@
+# Data quality assessment: Billable Bookings
+
+**2024-07-17, by Uri**
+
+This page aims to document the differences in Billable Bookings observed from the Finance point of view vs. the DWH point of view. It originates from the fact that, when we wanted to expose this KPI through the DWH, it proved to be inconsistent with the billable bookings coming from the Finance side.
+
+# Finance side
+
+Finance retrieves the Billable Bookings from the monthly accounting reports, which are usually generated on the first working day of each month by the project [data-invoicing-exporter](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter). Specifically, they retrieve it from the **Account Reports Summary**, summing the **BillableBookingCount** tab. Jamie provided the file they were using for June 2024 so I (Uri) could dig deeper into this. No amendment was made this month.
+
+The number of billable bookings according to Finance in June 2024 is: **25,538**
+
+# DWH side
+
+The billable bookings should come from the table **int_core__booking_charge_events**. There are mainly two ways of charging a booking: either when the **verification** starts, or when the **check-in** starts.
+
+The verification start date used for this logic changed recently (mid-June 2024) because of the new estimated Guest Journey Start Date, i.e. when the link is used. Here you can see the changes:
+
+- [Commit 4f672800 - Change on int_core__verification_requests](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/4f6728003a30f7188f10c48cc60eb2191d604907?refName=refs/heads/master&path=/models/intermediate/core/int_core__verification_requests.sql&_a=compare)
+- [Commit 4f672800 - Change on int_core__booking_charge_events](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/commit/4f6728003a30f7188f10c48cc60eb2191d604907?refName=refs%2Fheads%2Fmaster&path=%2Fmodels%2Fintermediate%2Fcore%2Fint_core__booking_charge_events.sql&_a=compare)
+
+Therefore, before this change, the booking charge events considered the verification start to be the **joined_at_utc** coming from the **int_core__unified_user** table.
+
+In a nutshell, running this on 17th July 2024 for the month of June, the values retrieved are:
+
+- Using verification estimated start date (link used at): **23,440**
+- Using verification guest joined date: **21,604**
+
+### Query available here
+
+```sql
+with
+booking_charge_events_joined_at as (
+with
+ stg_core__booking as (select * from staging.stg_core__booking),
+ int_core__price_plans as (select * from intermediate.int_core__price_plans),
+ int_core__verification_requests as (
+ select * from intermediate.int_core__verification_requests
+ ),
+ booking_with_relevant_price_plans as (
+ select
+ *,
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then icuu.joined_at_utc
+ end as booking_fee_charge_at_utc
+ from stg_core__booking b
+ left join
+ int_core__verification_requests vr
+ on b.id_verification_request = vr.id_verification_request
+ left join intermediate.int_core__unified_user icuu
+ on vr.id_user_guest = icuu.id_user
+ left join int_core__price_plans pp on b.id_user_host = pp.id_user_host
+ where
+ -- The dates that defines which price plan applies to the booking depends
+ -- on charged by type. With the below case, we evaluate if a certain price
+ -- plan relates to the booking
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc between pp.start_at_utc and pp.end_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then
+ coalesce(
+ (
+ icuu.joined_at_utc
+ between pp.start_at_utc and pp.end_at_utc
+ ),
+ false
+ )
+ else false
+ end
+ = true
+ ),
+ untied_bookings as (
+ -- If a booking has two valid price plans, take the earliest
+ select id_booking, min(id_price_plan) as id_price_plan
+ from booking_with_relevant_price_plans brpp
+ group by id_booking
+ )
+
+select
+ ub.id_booking,
+ ub.id_price_plan,
+ brpp.booking_fee_local,
+ brpp.booking_fee_charge_at_utc,
+ cast(brpp.booking_fee_charge_at_utc as date) as booking_fee_charge_date_utc
+from untied_bookings ub
+left join
+ booking_with_relevant_price_plans brpp
+ on ub.id_booking = brpp.id_booking
+ and ub.id_price_plan = brpp.id_price_plan
+
+),
+booking_charge_events_estimated_at as (
+with
+ stg_core__booking as (select * from staging.stg_core__booking),
+ int_core__unified_user as (select * from intermediate.int_core__unified_user),
+ int_core__price_plans as (select * from intermediate.int_core__price_plans),
+ int_core__verification_requests as (
+ select * from intermediate.int_core__verification_requests
+ ),
+ booking_with_relevant_price_plans as (
+ select
+ *,
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then vr.verification_estimated_started_at_utc
+ end as booking_fee_charge_at_utc
+ from stg_core__booking b
+ left join
+ int_core__verification_requests vr
+ on b.id_verification_request = vr.id_verification_request
+ left join int_core__price_plans pp on b.id_user_host = pp.id_user_host
+ where
+ -- The dates that defines which price plan applies to the booking depends
+ -- on charged by type. With the below case, we evaluate if a certain price
+ -- plan relates to the booking
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc between pp.start_at_utc and pp.end_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then
+ coalesce(
+ (
+ vr.verification_estimated_started_at_utc
+ between pp.start_at_utc and pp.end_at_utc
+ ),
+ false
+ )
+ else false
+ end
+ = true
+ ),
+ untied_bookings as (
+ -- If a booking has two valid price plans, take the earliest
+ select id_booking, min(id_price_plan) as id_price_plan
+ from booking_with_relevant_price_plans brpp
+ group by id_booking
+ )
+
+select
+ ub.id_booking,
+ ub.id_price_plan,
+ brpp.booking_fee_local,
+ brpp.booking_fee_charge_at_utc,
+ cast(brpp.booking_fee_charge_at_utc as date) as booking_fee_charge_date_utc
+from untied_bookings ub
+left join
+ booking_with_relevant_price_plans brpp
+ on ub.id_booking = brpp.id_booking
+ and ub.id_price_plan = brpp.id_price_plan
+
+)
+select
+ 'verification_estimated_at' as type,
+ COUNT(distinct id_booking) as billable_bookings
+from booking_charge_events_estimated_at
+where date_trunc('month', booking_fee_charge_date_utc) = '2024-06-01'
+union all
+select
+ 'verification_guest_joined_at' as type,
+ COUNT(distinct id_booking) as billable_bookings
+from booking_charge_events_joined_at
+where date_trunc('month', booking_fee_charge_date_utc) = '2024-06-01'
+```
+
+# First volume check
+
+Well, given that there’s clearly a difference between Finance numbers and what we can obtain from DWH (in either case…), before jumping into some python-based project that runs queries in the Core schema directly… maybe let’s do some simple checks.
+
+The Account Reports Summary excel is split per Company name, and it should be easy to retrieve the Deal Id linked to the company name (at least manually) for the most notable cases. So I just ordered the file in descending order of billable bookings and retrieved the first one:
+
+- Company Name: LavandaBilling Account
+- Billable Bookings: 3,178
+
+This has a couple of id_deals linked to it: '1604445496', '20529225110'
+
+
+
+### Query available here
+
+```sql
+select id_user as id_user_host, id_deal, company_name
+from intermediate.int_core__unified_user icuu
+where id_deal IN ('1604445496','20529225110')
+```
+
+The good news is that we have the KPIs view by Deal, so we can retrieve the information for these 2 Deals super fast. It will consider the estimated start date, as that’s the current behaviour in the DWH.
+
+The bad news is that… well… there are no billable bookings at all!
+
+
+
+### Query available here
+
+```sql
+select
+ *
+from intermediate.int_monthly_aggregated_metrics_history_by_deal
+where date = '2024-06-30'
+and id_deal in ('20529225110','1604445496')
+order by date desc
+```
+
+So clearly there’s a big difference hiding somewhere.
+
+Do we have a similar behaviour if using the **joined_at_utc** for verification start in **booking_charge_events**?
+
+… well, yes. Exactly 0 billable bookings!
+
+
+
+### Query available here
+
+```sql
+with
+booking_charge_events_joined_at as (
+with
+ stg_core__booking as (select * from staging.stg_core__booking),
+ int_core__price_plans as (select * from intermediate.int_core__price_plans),
+ int_core__verification_requests as (
+ select * from intermediate.int_core__verification_requests
+ ),
+ booking_with_relevant_price_plans as (
+ select
+ *,
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then icuu.joined_at_utc
+ end as booking_fee_charge_at_utc
+ from stg_core__booking b
+ left join
+ int_core__verification_requests vr
+ on b.id_verification_request = vr.id_verification_request
+ left join intermediate.int_core__unified_user icuu
+ on vr.id_user_guest = icuu.id_user
+ left join int_core__price_plans pp on b.id_user_host = pp.id_user_host
+ where
+ -- The dates that defines which price plan applies to the booking depends
+ -- on charged by type. With the below case, we evaluate if a certain price
+ -- plan relates to the booking
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc between pp.start_at_utc and pp.end_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then
+ coalesce(
+ (
+ icuu.joined_at_utc
+ between pp.start_at_utc and pp.end_at_utc
+ ),
+ false
+ )
+ else false
+ end
+ = true
+ ),
+ untied_bookings as (
+ -- If a booking has two valid price plans, take the earliest
+ select id_booking, min(id_price_plan) as id_price_plan
+ from booking_with_relevant_price_plans brpp
+ group by id_booking
+ )
+
+select
+ ub.id_booking,
+ ub.id_price_plan,
+ brpp.booking_fee_local,
+ brpp.booking_fee_charge_at_utc,
+ cast(brpp.booking_fee_charge_at_utc as date) as booking_fee_charge_date_utc
+from untied_bookings ub
+left join
+ booking_with_relevant_price_plans brpp
+ on ub.id_booking = brpp.id_booking
+ and ub.id_price_plan = brpp.id_price_plan
+
+),
+booking_charge_events_estimated_at as (
+with
+ stg_core__booking as (select * from staging.stg_core__booking),
+ int_core__unified_user as (select * from intermediate.int_core__unified_user),
+ int_core__price_plans as (select * from intermediate.int_core__price_plans),
+ int_core__verification_requests as (
+ select * from intermediate.int_core__verification_requests
+ ),
+ booking_with_relevant_price_plans as (
+ select
+ *,
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then vr.verification_estimated_started_at_utc
+ end as booking_fee_charge_at_utc
+ from stg_core__booking b
+ left join
+ int_core__verification_requests vr
+ on b.id_verification_request = vr.id_verification_request
+ left join int_core__price_plans pp on b.id_user_host = pp.id_user_host
+ where
+ -- The dates that defines which price plan applies to the booking depends
+ -- on charged by type. With the below case, we evaluate if a certain price
+ -- plan relates to the booking
+ case
+ when pp.price_plan_charged_by_type = 'CheckInDate'
+ then b.check_in_at_utc between pp.start_at_utc and pp.end_at_utc
+ when pp.price_plan_charged_by_type in ('VerificationStartDate', 'All')
+ then
+ coalesce(
+ (
+ vr.verification_estimated_started_at_utc
+ between pp.start_at_utc and pp.end_at_utc
+ ),
+ false
+ )
+ else false
+ end
+ = true
+ ),
+ untied_bookings as (
+ -- If a booking has two valid price plans, take the earliest
+ select id_booking, min(id_price_plan) as id_price_plan
+ from booking_with_relevant_price_plans brpp
+ group by id_booking
+ )
+
+select
+ ub.id_booking,
+ ub.id_price_plan,
+ brpp.booking_fee_local,
+ brpp.booking_fee_charge_at_utc,
+ cast(brpp.booking_fee_charge_at_utc as date) as booking_fee_charge_date_utc
+from untied_bookings ub
+left join
+ booking_with_relevant_price_plans brpp
+ on ub.id_booking = brpp.id_booking
+ and ub.id_price_plan = brpp.id_price_plan
+
+)
+select
+ 'verification_estimated_at' as type,
+ COUNT(distinct bce.id_booking) as billable_bookings
+from booking_charge_events_estimated_at bce
+inner join intermediate.int_core__bookings b
+ on bce.id_booking = b.id_booking
+inner join intermediate.int_core__unified_user u
+ on b.id_user_host = u.id_user
+ and u.id_deal in ('1604445496','20529225110')
+where date_trunc('month', bce.booking_fee_charge_date_utc) = '2024-06-01'
+union all
+select
+ 'verification_guest_joined_at' as type,
+ COUNT(distinct bce.id_booking) as billable_bookings
+from booking_charge_events_joined_at bce
+inner join intermediate.int_core__bookings b
+ on bce.id_booking = b.id_booking
+inner join intermediate.int_core__unified_user u
+ on b.id_user_host = u.id_user
+ and u.id_deal in ('1604445496','20529225110')
+where date_trunc('month', bce.booking_fee_charge_date_utc) = '2024-06-01'
+```
+
+So clearly we have a situation here. Thankfully, Clay took the time to explain to me that this company is a special case in which these bookings do not have Guest Journeys, which looks suspicious to me in the sense that it is probably affecting the DWH logic somehow. This is because it’s an “autohost account”, which probably comes from the Partner API.
+
+I also tried with the company with the 2nd highest number of Billable Bookings in June 2024, namely:
+
+- Company Name: Home Team Vacation Rentals LLC
+- Billable Bookings: 1,350
+- Deal Id: ‘15463726437’
+
+Reusing the previous query and changing the deal id, we get these results:
+
+
+
+So it’s looking MUCH better, even though we still miss/exceed the 1,350 by a few bookings. Let’s be pragmatic and focus first on the big issues, and then on the small ones…
+
+# Debugging data-invoicing-exporter
+
+I’ve already put some comments on the DevOps ticket, but it’s worth trying to understand where these Lavanda Billable Bookings come from.
+
+Billable Booking Count is computed in `dashboard_tools.py` [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/dashboard_tools.py&version=GBmain&line=122&lineEnd=123&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), which depends on `user_report_contents` [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/dashboard_tools.py&version=GBmain&line=81&lineEnd=82&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents). In turn, this is retrieved from the function `get_user_data`, defined [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/dashboard_tools.py&version=GBmain&line=186&lineEnd=187&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+
+From here, we can retrieve the query being used on bookings `queries.get_query_booking_data_of_user` [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/dashboard_tools.py&version=GBmain&line=200&lineEnd=201&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents).
+
+Time to switch to queries.py, where the query we’re interested in is [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=126&lineEnd=127&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents). This query contains some transactional SQL.
+
+Firstly, [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=140&lineEnd=141&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), it creates a snapshot of the latest available `PricePlanToUser` at the moment of the extraction, **to be applied to all the bookings of the exporting time period considered**. These are the price plans that will be used for the export. This is, technically, different from what we do in DWH, where **we apply the price plan that was active at the moment of verification start or check-in**. In case of more than one active price plan for a booking, we take the first one. This could explain changes in the volumes, but probably not the massive difference. We can check this later on.
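+
+To make the contrast explicit, here’s a minimal, hedged sketch of the “latest snapshot” approach expressed with the DWH models used elsewhere on this page (the exporter obviously does this on the backend tables, not on these models):
+
+```sql
+-- Hedged sketch: per host, take the most recent price plan regardless of the booking's
+-- check-in / verification dates (approximating the exporter's snapshot approach in DWH terms).
+with latest_price_plan as (
+    select
+        id_user_host,
+        id_price_plan,
+        price_plan_charged_by_type,
+        row_number() over (
+            partition by id_user_host
+            order by start_at_utc desc
+        ) as rn
+    from intermediate.int_core__price_plans
+)
+
+select id_user_host, id_price_plan, price_plan_charged_by_type
+from latest_price_plan
+where rn = 1
+```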
+
+Secondly, [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=166&lineEnd=167&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), it builds a unique booking table based on Guest, Accommodation and Check-in date. This is very similar to how we handle it in DWH in the `int_core__duplicate_booking` model. Keep in mind, though, that we’re not enforcing this deduplication in DWH for booking charge events.
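+
+Roughly speaking, that deduplication looks like this (a hedged sketch using DWH column names rather than the exporter’s actual SQL; `id_accommodation` is assumed to exist on the bookings model, as documented later in this export):
+
+```sql
+-- Hedged sketch: keep a single booking per (guest, accommodation, check-in date), mirroring
+-- the idea behind the exporter's unique booking table and int_core__duplicate_booking.
+with ranked_bookings as (
+    select
+        id_booking,
+        row_number() over (
+            partition by id_user_guest, id_accommodation, cast(check_in_at_utc as date)
+            order by id_booking
+        ) as rn
+    from intermediate.int_core__bookings
+)
+
+select id_booking
+from ranked_bookings
+where rn = 1
+```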
+
+Thirdly, [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=174&lineEnd=175&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), it computes a variable called `ListingStartDateByField` (which looks like a naming mistake) that mainly retrieves the `PricePlanChargedByTypeId`. This comes from the snapshot made in point 1.
+
+The last part is mainly a split on whether the price plan charged by type id is 3 or not. In human terms, this means a different logic between A) billing being considered at the CheckInDate and B) billing being considered at the VerificationStartDate or All.
+
+
+
+This is quite similar to what we do in DWH in Booking Charge Events, except that we filter by the name instead of the ID.
+
+These queries are massive monsters with ~10 tables joined. However, these are all left joins (except for the unique bookings mentioned in the second point), so either the booking does not exist in Bookings or the difference lies in the WHERE clauses. For the sake of me not getting a stroke, I hope it’s the second, so let’s go.
+
+Billable at check-in [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=180&lineEnd=181&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents):
+
+- UserVerificationStatusId IS NOT NULL, from SuperhogUser table, understanding this as the guest user because of how it’s joined
+- GuestUserId IS NOT NULL, from User table
+- CreatedByUserId equals the user id we’re retrieving
+- CheckIn date is between Extraction Start and Extraction End, in a nutshell
+
+Else (meaning billable at verification start) [here](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter?path=/sh_invoicing/queries.py&version=GBmain&line=237&lineEnd=238&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents):
+
+- UserVerificationStatusId IS NOT NULL, from SuperhogUser table, understanding this as the guest user because of how it’s joined
+- GuestUserId IS NOT NULL, from User table
+- CreatedByUserId equals the user id we’re retrieving
+- and at least one of these conditions needs to be true
+ - If the VerificationRequestId from Booking table IS NOT NULL then the UpdatedDate from the VerificationRequest needs to be between Extraction Start and Extraction End
+ - If the VerificationRequestId from Booking table IS NULL then the JoinDate from the User table (Guest) needs to be between Extraction Start and Extraction End
+
+Checking this versus what the DWH is computing… well, it seems we do not have (nor have we ever had) the UpdatedDate condition for the verification start. It would make sense that the main problem for the Lavanda subject comes from this point; after all, this condition only applies if VerificationRequestId is not filled, meaning it’s a booking that does not have a verification process because we ‘trust’ autohost verifications, which would be in line with what Clay explained.
+
+Let’s run a quick query to replicate the verification request being null/not null in DWH for this case. I’ll also retrieve the latest price plan for the users assigned to these Deal Ids.
+
+### Query here!
+
+```sql
+with pp as (
+select pp.*
+from intermediate.int_core__price_plans pp
+inner join intermediate.int_core__unified_user uu
+on pp.id_user_host = uu.id_user
+and uu.id_deal in ('20529225110', '1604445496')
+where pp.end_date_utc >= '2024-07-01' and pp.start_date_utc < '2024-07-01'
+)
+select
+ host.id_deal,
+ b.id_user_host,
+ count(distinct b.id_booking) as cd_booking,
+ count(1) as count
+from
+ intermediate.int_core__bookings b
+inner join intermediate.int_core__unified_user host
+ on
+ b.id_user_host = host.id_user
+inner join intermediate.int_core__unified_user guest
+ on
+ b.id_user_guest = guest.id_user
+left join intermediate.int_core__verification_requests vr
+ on
+ b.id_verification_request = vr.id_verification_request
+left join pp on pp.id_user_host = b.id_user_host
+where
+ guest.id_user_verification_status is not null
+ and guest.id_user is not null
+ and host.id_deal in ('20529225110', '1604445496')
+ and pp.id_price_plan <> 3
+ and
+ (
+ b.id_verification_request is null
+ and date_trunc('month',
+ guest.joined_date_utc) = '2024-06-01'
+ or
+ b.id_verification_request is not null
+ and date_trunc('month',
+ vr.updated_date_utc) = '2024-06-01'
+ )
+group by 1 ,2
+order by cd_booking desc
+```
+
+The result of the query is the following:
+
+
+
+and as you can see it’s quite close to the Lavanda billable bookings from Finance:
+
+
+
+At this stage I wonder whether there’s a 1-day difference in the period considered between the DWH query and the Finance export, or whether the data is extracted over exactly the last month.
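+
+To quantify that, a quick, hedged check could be to count how many of these bookings sit exactly on the period boundaries, for example:
+
+```sql
+-- Hedged sketch: bookings whose guest joined date falls on the boundary days of the period,
+-- to gauge whether a one-day shift in the extraction window could explain the remaining gap.
+select
+    cast(guest.joined_date_utc as date) as joined_date,
+    count(distinct b.id_booking) as bookings
+from intermediate.int_core__bookings b
+inner join intermediate.int_core__unified_user guest
+    on b.id_user_guest = guest.id_user
+where cast(guest.joined_date_utc as date) in ('2024-05-31', '2024-06-01', '2024-06-30', '2024-07-01')
+group by 1
+order by 1
+```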
+
+By removing the filter on the deal id and including the 2 types of billing, meaning at verification start and at check in, we get:
+
+
+
+Which is quite close to the 25,538 billable bookings from Finance. The fact that cd_booking and count display different numbers is worth double-checking, to ensure there are no duplicates.
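+
+Separately from the full query below, a quick, hedged way to check for those duplicates could be something like:
+
+```sql
+-- Hedged sketch: check whether the gap between count(distinct id_booking) and count(1)
+-- comes from hosts having more than one active price plan (which duplicates booking rows).
+select
+    b.id_booking,
+    count(1) as rows_per_booking,
+    count(distinct pp.id_price_plan) as active_price_plans
+from intermediate.int_core__bookings b
+left join intermediate.int_core__price_plans pp
+    on pp.id_user_host = b.id_user_host
+    and pp.end_date_utc >= '2024-07-01'
+    and pp.start_date_utc < '2024-07-01'
+group by b.id_booking
+having count(1) > 1
+order by rows_per_booking desc
+```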
+
+
+
+### Query here!
+
+```sql
+with pp as (
+select pp.*
+from intermediate.int_core__price_plans pp
+inner join intermediate.int_core__unified_user uu
+on pp.id_user_host = uu.id_user
+where pp.end_date_utc >= '2024-07-01' and pp.start_date_utc < '2024-07-01'
+),
+verification_start as (
+select
+ host.id_deal,
+ b.id_user_host,
+ 'verification_start' as type,
+ count(distinct b.id_booking) as cd_booking,
+ count(1) as count
+from
+ intermediate.int_core__bookings b
+inner join intermediate.int_core__unified_user host
+ on
+ b.id_user_host = host.id_user
+inner join intermediate.int_core__unified_user guest
+ on
+ b.id_user_guest = guest.id_user
+left join intermediate.int_core__verification_requests vr
+ on
+ b.id_verification_request = vr.id_verification_request
+left join pp on pp.id_user_host = b.id_user_host
+where
+ guest.id_user_verification_status is not null
+ and guest.id_user is not null
+ and pp.id_price_plan <> 3
+ and
+ (
+ b.id_verification_request is null
+ and date_trunc('month',
+ guest.joined_date_utc) = '2024-06-01'
+ or
+ b.id_verification_request is not null
+ and date_trunc('month',
+ vr.updated_date_utc) = '2024-06-01'
+ )
+group by 1,2,3
+),
+check_in as (
+select
+ host.id_deal,
+ b.id_user_host,
+ 'check-in' as type,
+ count(distinct b.id_booking) as cd_booking,
+ count(1) as count
+from
+ intermediate.int_core__bookings b
+inner join intermediate.int_core__unified_user host
+ on
+ b.id_user_host = host.id_user
+inner join intermediate.int_core__unified_user guest
+ on
+ b.id_user_guest = guest.id_user
+left join intermediate.int_core__verification_requests vr
+ on
+ b.id_verification_request = vr.id_verification_request
+left join pp on pp.id_user_host = b.id_user_host
+where
+ guest.id_user_verification_status is not null
+ and guest.id_user is not null
+ and pp.id_price_plan = 3
+ and b.id_verification_request is not null
+ and date_trunc('month',
+ b.check_in_at_utc) = '2024-06-01'
+group by 1,2,3
+),
+totals as (
+select * from check_in
+union all
+select * from verification_start
+)
+select
+ type,
+ sum(cd_booking) as cd_booking,
+ sum(count) as count
+from totals
+group by 1
+```
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data quality assessment Billable Bookings 97008b7f1cbb4beb98295a22528acd03.md:Zone.Identifier b/notion_data_team_no_files/Data quality assessment Billable Bookings 97008b7f1cbb4beb98295a22528acd03.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment Billable Bookings 97008b7f1cbb4beb98295a22528acd03.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data quality assessment DWH vs Finance revenue fig 6e3d6b75cdd4463687de899da8aab6fb.md b/notion_data_team_no_files/Data quality assessment DWH vs Finance revenue fig 6e3d6b75cdd4463687de899da8aab6fb.md
new file mode 100644
index 0000000..d194d6f
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment DWH vs Finance revenue fig 6e3d6b75cdd4463687de899da8aab6fb.md
@@ -0,0 +1,307 @@
+# Data quality assessment: DWH vs. Finance revenue figures
+
+**From 2024-07-22 to 2024-07-24, by Uri**
+
+This page aims to document the differences in Revenue figures observed from Finance point of view vs. DWH point of view. This originates from Suzannah’s request to validate the revenue figures displayed.
+
+# Summary
+
+> Revenue figures are not fully consistent between the Data (DWH) side and Finance. Xero-based reporting is generally OK, with some small discrepancies that have minimal impact and can partially be explained. However, a massive discrepancy on Guest Revenue (Waivers, Deposit Fees, Guest Products) has been detected, since Data is reporting it with taxes included while Finance seems to be reporting it without taxes. This also generates discrepancies within Data reporting, in the sense that Xero-based reporting/metrics are usually tax exclusive. The issue was communicated to all users on 24th July; we are waiting for a decision/action on how to proceed once Pablo & Suzannah are back from holidays.
+>
+
+The following sections refer to the investigation carried out by @Oriol (Uri) since July 22nd.
+
+# Finance side
+
+Finance has a file with the different revenue sources, their aggregations and the final net revenues. It is divided month by month. After the launch of Check-in Hero, there is a new tab since April 2024 to capture the Guest Products revenue line, so the information is split into two sheets: from April 2023 to March 2024, and from April 2024 onwards. At the moment of writing this page, the nearest fully completed month was May 2024. The file we’re considering was shared with Uri on 19th July 2024.
+
+On the Data team side, please refer to Uri for access to the file, since it might not necessarily be public knowledge, and therefore it won’t be published on this page.
+
+# DWH side
+
+We call “DWH side” anything that is computed in the DWH by the Data team. This means, concretely, the information displayed in the Business Overview Power BI application, plus other information available in intermediate steps within the DWH. We will further differentiate between “KPIs” - referring to those figures that come from the “Global KPIs view”, meaning the `mtd` models - vs. what is already shared in the other Business Overview reports on Guest Payments, Host Fees and e-deposit. In a nutshell:
+
+
+
+where the reports in red were already available before the KPIs, and green refers to the new data available in the KPIs initiative. Since the KPIs initiative is quite new, we cannot rule out inconsistencies between these 2 areas - although, if they exist, they should be minimal.
+
+# First validation - Finance vs. Main KPIs
+
+This first validation was conducted on the afternoon of July 22nd. This is important, as we’ll see in later steps.
+
+The validation consists of the following steps:
+
+- Retrieve the historical revenue figures of the business KPIs directly from the DWH. This data comes from the table `int_mtd_vs_previous_year_metrics`, mostly because it contains more computed metrics than what is currently displayed in the reports. See the query below.
+ - Query here!
+
+ ```sql
+ select
+ date,
+ xero_booking_net_fees_in_gbp,
+ xero_listing_net_fees_in_gbp,
+ xero_verification_net_fees_in_gbp,
+ xero_operator_net_fees_in_gbp,
+ xero_waiver_net_fees_in_gbp,
+ xero_apis_net_fees_in_gbp,
+ xero_e_deposit_net_fees_in_gbp,
+ xero_guesty_net_fees_in_gbp,
+ xero_host_resolution_amount_paid_in_gbp,
+ total_guest_revenue_in_gbp,
+ total_revenue_in_gbp
+
+ from intermediate.int_mtd_vs_previous_year_metrics imvpym
+ where date between '2023-04-30' and '2024-05-31'
+ ```
+
+- Export it and copy-paste it, transposed, into the same Excel file we have from Finance, in a dedicated tab called Data figures (22nd July)
+- Map the granularities and naming between what Finance reports and what we report on the KPIs side from the DWH. Find below the equivalence table, to the best of my knowledge.
+ - Equivalence table
+
+
+ | Common Name | DWH | Finance |
+ | --- | --- | --- |
+ | Booking fees | xero_booking_net_fees_in_gbp | Booking fees |
+ | Listing fees | xero_listing_net_fees_in_gbp | Listing fees |
+ | Verification fees | xero_verification_net_fees_in_gbp | Verification fees |
+ | E-deposit fees | xero_e_deposit_net_fees_in_gbp | E-Deposit Fees |
+ | Guesty | xero_guesty_net_fees_in_gbp | Guesty |
+ | Damage waivers payments | xero_waiver_net_fees_in_gbp | Damage waiver payments |
+ | Guest Revenue | total_guest_revenue_in_gbp | Damage Waivers + Deposit Handling + Guest products + Damage waiver payments |
+ | Operator Revenue | xero_operator_net_fees_in_gbp | Booking fees + Listing fees + Verification fees |
+ | APIs Revenue | xero_apis_net_fees_in_gbp | E-Deposit Fees + Guesty |
+ | Total Revenue | total_revenue_in_gbp | Damage Waivers + Deposit Handling + Guest products + Damage waiver payments + Booking fees + Listing fees + Verification fees + E-Deposit Fees + Guesty |
+- Compute the absolute difference on the figures, Data minus Finance.
+- Compute the relative difference on the figures, (Data minus Finance)/Finance %. The result can be seen in the following screenshot
+
+
+
+**Early conclusions:**
+
+- In May 2024, the DWH is not reporting any revenue from Guesty, hence the impact on the Guesty value. This also affects APIs revenue, which only considers e-deposit revenue in May, and in turn Total Revenue.
+- Total Revenue figures are generally higher on the DWH side than on the Finance side, by around +5% in 2024.
+    - The main reason is that Guest Revenue figures are higher than what Finance reports, by around +15% in 2024. The discrepancy also seems to grow the further we go in time.
+    - A first hypothesis would be the back-filled application of xe.com currency rates, which is automatic for all DWH reporting. This would only impact Guest Revenue, since it’s the only source reading from the backend; the rest of the metrics would not be affected by this currency rate. However, the discrepancy is so large that it likely cannot be the only cause.
+- In November and December 2023 and in January 2024, there’s a huge discrepancy in the Verification fees reported. However, these represent a small percentage of Operator Revenue and therefore of Total Revenue.
+- The rest of the metrics look generally fine, with little to no impact. The differences seem very sporadic, so if we need to check them, it will be with the least priority.
+
+The next sections focus specifically on the first 3 points, which are considered the most critical.
+
+## May 2024 - Guesty Revenue
+
+After a quick check in the raw data from Xero, it’s clear that there was only one invoice created for Guesty in May 2024, but it was deleted, without any value assigned to it. The actual May invoice seems to have been created at the end of June.
+
+
+
+After reporting this to Jamie, he acknowledged that what we see in the raw data - and therefore our reporting - is correct. There were some issues getting the reporting done on time, so the Finance number is an estimate for the board pack. The real number coincides with what we are going to report as June’s Guesty Revenue from the DWH side. Lastly, the DELETED status is probably an error, according to Jamie.
+
+This difference is considered controlled.
+
+## Guest Revenue figures are considerably higher in DWH vs. what we see on Finance
+
+It’s worth stating again that Guest Revenue is the only figure that does not come exclusively from Xero. Specifically, for Guest Revenue we’re considering:
+
+- Deposit Fees → Backend
+- Checkin Hero Fees → Backend
+- Guest Waiver Payments → Backend
+- and then we subtract the amount of Waivers paid back to the Host → Xero
+
+It’s worth acknowledging that the Damage Waivers Payments line shows close to no difference since October 2023, so the discrepancy likely comes from how we’re modelling the data coming from the backend, or from the backend itself.
+
+We already mentioned that a partial explanation could be the currency rates integration from xe.com, which would have triggered a complete backfill of revenue figures - so it could explain part of the discrepancy. Still, the discrepancy is quite large for just a day-by-day currency conversion instead of a monthly / hardcoded one.
+
+### DWH 23rd July vs. DWH 22nd July
+
+At this stage, on 23rd July 2024, I conduct a very silly experiment. I re-run the same export that I made yesterday for the business KPIs retrieval [Query here!](Data%20quality%20assessment%20DWH%20vs%20Finance%20revenue%20fig%206e3d6b75cdd4463687de899da8aab6fb.md). I will compare this export vs. what I retrieved yesterday, 22nd July 2024, with no changes to the query. The main goal is to understand whether there are additional backfill updates on the KPIs that we should be aware of, which could potentially explain these differences. Here’s the result:
+
+
+
+It’s available in the sheet Data 23rd vs. Data 22nd
+
+In this case we’re seeing absolute differences in GBP. The good news is that for all Xero metrics there is no difference at all. The not-so-good news is that from one day to the next, we lost 100 GBP in April and 79 GBP in May… in mid-July 2024.
+
+After debugging this, I discovered that this is because of refunds when the booking is cancelled, which makes tons of sense. See the example below for May 2024 case:
+
+- Query here!
+
+ ```sql
+ select
+ p.dwh_extracted_at_utc,
+ p.created_date_utc,
+ p.paid_date_utc,
+ p.amount,
+ p.currency,
+ p.notes,
+ ps.payment_status
+ from
+ staging.stg_core__payment p
+ left join staging.stg_core__payment_status ps
+ on p.id_payment_status = ps.id_payment_status
+ where date_trunc('month', p.paid_date_utc)::date = '2024-05-01'
+ order by p.dwh_extracted_at_utc desc
+ ```
+
+
+
+
+The 79 GBP difference can be explained by the first 3 rows, which were extracted between mid-afternoon of 22nd July and the morning of 23rd July. Converting them from USD to GBP and adding them up gives the 79 GBP.
+
+Whenever refunds happen, the status of the payment is no longer PAID, but REFUNDED. Since in the KPIs we only consider those that are PAID, this is normal behaviour. It might be interesting to discuss further whether we want to keep this behaviour (meaning we accept that Revenue figures will likely decrease over time until the booking is over) or modify it to have more stable yet probably less accurate data (meaning we ignore the refunds, or consider them separately).
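+
+If we want to make that discussion concrete, a hedged sketch like the one below (assuming the status labels are 'Paid' and 'Refunded', as elsewhere in the DWH) could quantify how much “paid” revenue later turns into refunds each month:
+
+```sql
+-- Hedged sketch: monthly amounts by payment status and currency, to size the refund effect
+-- (amounts are kept in local currency here; no GBP conversion is applied).
+select
+    date_trunc('month', p.paid_date_utc)::date as first_day_month,
+    ps.payment_status,
+    p.currency,
+    count(1) as payments,
+    sum(p.amount) as amount_local_currency
+from staging.stg_core__payment p
+left join staging.stg_core__payment_status ps
+    on p.id_payment_status = ps.id_payment_status
+where ps.payment_status in ('Paid', 'Refunded')
+group by 1, 2, 3
+order by 1 desc, 2, 3
+```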
+
+An important observation, though: the fact that revenue figures in the DWH decrease (a bit) over time further widens the discrepancy we’re observing between DWH and Finance. Each day that we full-refresh the history of the KPIs, these figures decrease, and we are still reporting higher Guest Revenue values. Thus, a snapshot of the KPIs taken on the 1st day of a month for the previous month would show an even higher discrepancy vs. Finance.
+
+**Long story short**: good to know, but not what I was looking for, so let’s continue the investigation.
+
+At this stage, the easiest next step is to check whether there is a specific issue in the computation of the payments coming from guests, meaning splitting the guest revenue between Waiver payments, Deposit Fees and Check-in Hero. I’ll take the opportunity to have it computed at the same level as we have for the other Operator revenue metrics.
+
+### Further debugging revenue splits
+
+On the afternoon of 23rd July, a new deployment was made available to retrieve the detailed data on guest sources of income. So now I proceed to retrieve the data from the DWH with the additional inputs. It’s a modification of the previously shared query, and can be found below:
+
+- Query here!
+
+ ```sql
+ select
+ date,
+ xero_booking_net_fees_in_gbp,
+ xero_listing_net_fees_in_gbp,
+ xero_verification_net_fees_in_gbp,
+ xero_operator_net_fees_in_gbp,
+ xero_waiver_net_fees_in_gbp,
+ xero_apis_net_fees_in_gbp,
+ xero_e_deposit_net_fees_in_gbp,
+ xero_guesty_net_fees_in_gbp,
+ xero_host_resolution_amount_paid_in_gbp,
+ total_guest_revenue_in_gbp,
+ total_revenue_in_gbp,
+ deposit_fees_in_gbp,
+ waiver_payments_in_gbp,
+ checkin_cover_fees_in_gbp,
+ total_guest_income_in_gbp
+
+ from intermediate.int_mtd_vs_previous_year_metrics imvpym
+ where date between '2023-04-30' and '2024-05-31'
+ ```
+
+
+I mainly added:
+
+- `deposit_fees_in_gbp`
+- `waiver_payments_in_gbp`
+- `checkin_cover_fees_in_gbp`
+- and an aggregation of the 3 previous values into `total_guest_income_in_gbp`.
+
+At this stage, it’s worth differentiating between the metrics (a quick sanity-check sketch follows the list):
+
+- `total_guest_revenue_in_gbp = deposit_fees_in_gbp + waiver_payments_in_gbp + checkin_cover_fees_in_gbp + xero_waiver_net_fees_in_gbp` (note that this last one corresponds to the amount paid to the host and it’s negative by nature)
+- `total_guest_income_in_gbp = deposit_fees_in_gbp + waiver_payments_in_gbp + checkin_cover_fees_in_gbp`
+- `total_guest_payments_in_gbp` = any payment made by guests, without enforcing that it comes from a deposit fee, waiver or check-in cover.
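+
+As a quick sanity check of these identities, something along these lines (a hedged sketch reusing the columns from the query above) should return gaps of zero:
+
+```sql
+-- Hedged sanity check: both gaps should be (close to) zero if the identities above hold.
+select
+    date,
+    total_guest_income_in_gbp
+        - (deposit_fees_in_gbp + waiver_payments_in_gbp + checkin_cover_fees_in_gbp)
+        as income_identity_gap,
+    total_guest_revenue_in_gbp
+        - (total_guest_income_in_gbp + xero_waiver_net_fees_in_gbp)
+        as revenue_identity_gap
+from intermediate.int_mtd_vs_previous_year_metrics
+where date between '2023-04-30' and '2024-05-31'
+order by date
+```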
+
+Thus, the new equivalence table (updated) is as follows:
+
+- Updated equivalence table
+
+
+ | Common Name | DWH | Finance |
+ | --- | --- | --- |
+ | Booking fees | xero_booking_net_fees_in_gbp | Booking fees |
+ | Listing fees | xero_listing_net_fees_in_gbp | Listing fees |
+ | Verification fees | xero_verification_net_fees_in_gbp | Verification fees |
+ | E-deposit fees | xero_e_deposit_net_fees_in_gbp | E-Deposit Fees |
+ | Guesty | xero_guesty_net_fees_in_gbp | Guesty |
+ | Damage waivers payments | xero_waiver_net_fees_in_gbp | Damage waiver payments |
+ | Guest Revenue | total_guest_revenue_in_gbp | Damage Waivers + Deposit Handling + Guest products + Damage waiver payments |
+ | Operator Revenue | xero_operator_net_fees_in_gbp | Booking fees + Listing fees + Verification fees |
+ | APIs Revenue | xero_apis_net_fees_in_gbp | E-Deposit Fees + Guesty |
+ | Total Revenue | total_revenue_in_gbp | Damage Waivers + Deposit Handling + Guest products + Damage waiver payments + Booking fees + Listing fees + Verification fees + E-Deposit Fees + Guesty |
+ | Waiver income | waiver_payments_in_gbp | Damage Waivers |
+ | Checkin hero | checkin_cover_fees_in_gbp | Guest Products |
+ | Deposit fees | deposit_fees_in_gbp | Deposit Handling |
+ | Guest Income | total_guest_income_in_gbp | Damage Waivers + Deposit Handling + Guest products |
+
+Note that the only difference between Guest Revenue and Guest Income is that we subtract the waiver amount paid back to hosts in Guest Revenue, while we don’t for Guest Income. Afterwards, we first compare this data vs. the data retrieved this morning.
+
+
+
+This comparison is available in the sheet Data 23rd AFT vs. Data 23rd
+
+So again we have a source of discrepancy in revenue figures coming from Guest Revenue, though these might potentially be linked to the changes made in the DWH. At this stage I don’t spend more time debugging these cases.
+
+Afterwards, we apply the same comparison strategy as yesterday, namely these new figures from Data 23rd AFT vs. the Finance export.
+
+
+
+It’s not clear why this is impacting all guest revenue sources, even Check-in Hero, which had only 3 payments in May 2024, where the difference is 28.43 GBP in all Data reports vs. 26.71 GBP in Finance. One possibility is that for these metrics we’re comparing figures with vs. without taxes; another is that the currency rate being used is clearly different.
+
+### Tax inclusive or tax exclusive?
+
+After a quick look around on July 24th, I am not able to determine whether the Payments table - and generally, any amount coming from the Superhog backend (what we internally call Core in Data) - is tax inclusive or not. A message has been posted to the senior devs and the product team in case someone knows the answer.
+
+Ben Robinson confirms that the Payments table has taxes included, which is the main reason why the Guest-related figures show a discrepancy between Data and Finance. This means we clearly have a discrepancy in how monetary amounts are measured between the different sources (Xero vs. backend).
+
+Pending confirmation from Jamie on the source of the data used for the Guest revenue metrics on the Finance side, a communication was sent on the #data channel on 24th July 2024 at 11 AM, informing users about the issue and its detailed impacts.
+
+- Message sent here
+
+ 🔥 Important message regarding **revenue & monetary amounts data quality** on Power BI reports 🔥
+
+ After aiming to match revenue figures reported from Data side vs. Finance side, **we have detected a discrepancy being caused by tax inclusiveness / exclusiveness**. In a nutshell, Xero-based figures are being reported without taxes while Superhog backend (SH) figures coming from guest payments are being reported with the associated taxes of each country.
+
+ 🔴 This is generating **incorrect insights** in the areas/metrics that combine both sources, specifically:
+
+ - **Business Overview - Guest Payments report**
+ - Waivers: total waiver amount charged (tax incl.) vs. amount paid back to hosts (tax. excl). The % Paid to Host is also impacted
+ - **Business Overview - Main KPIs**
+ - Total Revenue and derived metrics combine both data from SH (Guest Revenue, coming from guest payments) and Xero (Invoiced Operator Revenue, Invoiced APIs Revenue). Also, keep in mind that by nature the different Revenue sources can be inconsistent between them.
+
+ We highly recommend NOT taking any decision based on the data displayed on the abovementioned areas.
+
+ **🟡 If you're using other reports, data should be consistent within that report/tab**. In a nutshell, keep in mind the following:
+
+ - Business overview: The readme and tabs should specify the source of the data, thus SH = tax incl. and Xero = tax excl. Beyond Waivers, tabs should be consistent.
+ - Accounting reports: These come from Xero, so all should be tax excl.
+ - Check-in Hero reports: These come from SH, thus data is tax incl.
+
+ **We're** **not expecting to provide a solution for this issue anytime soon** as we need Pablo and Suzannah back from holidays to deep-dive into it.
+
+ We'll keep you posted in the following days as we have more news on this subject. Sorry for any inconvenience.
+
+
+From here, we have 2 possibilities:
+
+- All DWH reporting is set to tax inclusive, except in some dedicated areas.
+    - Pros: feasible, since Xero-based data allows selecting tax incl. or tax excl.
+    - Cons: it can create inconsistencies in the future that we might not be able to understand, because we won’t always be able to compare vs. Finance data.
+- All DWH reporting is set to tax exclusive, except in some dedicated areas.
+    - Pros: consistency would be guaranteed across all data reporting and analysis. At the moment, to the best of my knowledge, there is no tax-incl. use case.
+    - Cons: likely not feasible with the data coming from the backend. Maybe Stripe contains this information (I’d guess it is the source of the Finance data anyway), but it would require work on the Data Engineering / tech team side.
+
+For the time being, we will live with this issue until Pablo & Suzannah are back from holidays, as I’m not knowledgeable enough about financial subjects.
+
+## Guest Payments and Guest Income showing very similar values
+
+Guest Payments is a metric displayed in the business KPIs dashboard that is defined as follows:
+
+> Total monetary amount of income paid by guests, converted to GBP. All verification payment types are considered (Waiver, Deposit, Fee, Check-in Cover).
+>
+
+It’s worth reinforcing that it only considers payments with a PAID status. On the other side, the Guest Income computation, created for the purpose of this investigation, uses exactly the same definition but only considers Waiver, Fee and Check-in Cover - meaning we’re excluding Deposit amounts.
+
+Knowing that deposits generally have a large monetary amount linked to them, why is there such a small difference (or no difference at all) between these 2 metrics?
+
+- Query here!
+
+ ```sql
+ select
+ date_trunc('month',payment_paid_date_utc)::date as first_day_month,
+ verification_payment_type,
+ payment_status,
+ count(1) as total_volume,
+ count(distinct id_payment) as unique_payments,
+ sum(amount_in_gbp) as amount_in_gbp
+ from intermediate.int_core__verification_payments icvp
+ group by 1,2,3
+ order by 1 desc, 2 asc, 3 asc
+ ```
+
+
+
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data quality assessment DWH vs Finance revenue fig 6e3d6b75cdd4463687de899da8aab6fb.md:Zone.Identifier b/notion_data_team_no_files/Data quality assessment DWH vs Finance revenue fig 6e3d6b75cdd4463687de899da8aab6fb.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment DWH vs Finance revenue fig 6e3d6b75cdd4463687de899da8aab6fb.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data quality assessment Guest Journeys with Paymen 5a34141e4f2f4267a9ce290101179610.md b/notion_data_team_no_files/Data quality assessment Guest Journeys with Paymen 5a34141e4f2f4267a9ce290101179610.md
new file mode 100644
index 0000000..e1c8e47
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment Guest Journeys with Paymen 5a34141e4f2f4267a9ce290101179610.md
@@ -0,0 +1,61 @@
+# Data quality assessment: Guest Journeys with Payments but that are not completed (or not even started)
+
+This is a brief explanation of some edge cases that could lead to data quality problems - or, potentially, be linked to a bug.
+
+# Problem
+
+During the initiative of creating business KPIs, we need to compute a rate between Guest Journeys with Payment and Guest Journeys Completed. The assumption we had was that these are sequential steps in the guest journey, namely:
+
+1. Guest Journey gets created
+2. Guest Journey starts
+3. Guest Journey is completed
+4. Guest Journey has payment
+
+We know it’s possible that some guests do not start the guest journey, or start it but do not complete it; but our assumption is that in these cases, these guests should not be able to complete the following steps.
+
+What we observe is that we can have guest journeys with a payment that:
+
+1) have not been completed
+
+2) have not even been started
+
+Initially this was spotted in the DWH logic. However, we have the same problem at the source when executing the function `GetVerificationProgress()`.
+
+# Backend Snippet
+
+For further investigation on the backend side:
+
+```sql
+SELECT
+ vr.Id,
+ b.BookingId,
+ p.PaymentId,
+ ps.Name,
+ dbo.GetVerificationProgress(
+ b.BookingId,
+ vr.VerificationSetId
+) as VrProgress
+FROM
+ dbo.VerificationToPayment vtp
+LEFT JOIN dbo.Verification v
+ ON
+ v.Id = vtp.VerificationId
+LEFT JOIN dbo.VerificationRequest vr
+ ON
+ v.VerificationRequestId = vr.Id
+LEFT JOIN dbo.Payment p
+ ON
+ vtp.PaymentId = p.PaymentId
+LEFT JOIN dbo.PaymentStatus ps
+ ON
+ p.PaymentStatusId = ps.Id
+LEFT JOIN dbo.Booking b
+ ON
+ b.VerificationRequestId = vr.Id
+WHERE dbo.GetVerificationProgress(
+ b.BookingId,
+ vr.VerificationSetId
+) != 'Complete' AND ps.Name = 'Paid'
+```
+
+
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data quality assessment Guest Journeys with Paymen 5a34141e4f2f4267a9ce290101179610.md:Zone.Identifier b/notion_data_team_no_files/Data quality assessment Guest Journeys with Paymen 5a34141e4f2f4267a9ce290101179610.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment Guest Journeys with Paymen 5a34141e4f2f4267a9ce290101179610.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Data quality assessment Verification Requests with 1350446ff9c980f9b0bdea31eb03bac4.md b/notion_data_team_no_files/Data quality assessment Verification Requests with 1350446ff9c980f9b0bdea31eb03bac4.md
new file mode 100644
index 0000000..4d3b625
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment Verification Requests with 1350446ff9c980f9b0bdea31eb03bac4.md
@@ -0,0 +1,31 @@
+# Data quality assessment: Verification Requests with Payment but without Bookings
+
+This is a brief explanation of some edge cases that could lead to data quality problems - or, potentially, be linked to a bug.
+
+# Problem
+
+During the initiative of creating business KPIs for the Guest Squad, we need to compute a rate between Guest Journeys with Payment and Guest Journeys Created. The assumption we had was that although there are verification requests without an associated booking (for API requests), these shouldn’t have any payments. However, we have found that such cases do exist, though they are relatively rare and represent a very small percentage of the total amount paid (0.3% of payments in 2024).
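+
+For reference, the 0.3% figure can be approximated with something along these lines (a hedged sketch using the same models as the snippet below):
+
+```sql
+-- Hedged sketch: share of 2024 paid amounts whose verification request has no associated booking.
+select
+    sum(case when b.id_booking is null then vp.amount_in_gbp else 0 end)
+        / nullif(sum(vp.amount_in_gbp), 0) as share_without_booking
+from intermediate.int_core__verification_payments vp
+left join intermediate.int_core__bookings b
+    on b.id_verification_request = vp.id_verification_request
+where vp.payment_status = 'Paid'
+    and vp.payment_paid_date_utc >= '2024-01-01'
+```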
+
+We need to check whether this is an error - maybe the booking records somehow got deleted - or whether there is an explanation under which these cases are correct.
+
+# DWH Snippet
+
+For further investigation on the DWH side:
+
+```sql
+SELECT b.id_booking,
+vp.id_verification_to_payment,
+vp.payment_paid_date_utc,
+vp.amount_in_gbp,
+vp.payment_status,
+vp.id_verification_request
+FROM intermediate.int_core__verification_payments vp
+LEFT JOIN intermediate.int_core__bookings b
+ON b.id_verification_request = vp.id_verification_request
+WHERE vp.payment_paid_date_utc IS NOT NULL
+AND b.id_booking IS NULL
+AND vp.payment_status = 'Paid'
+ORDER BY vp.payment_paid_date_utc DESC
+```
+
+
\ No newline at end of file
diff --git a/notion_data_team_no_files/Data quality assessment Verification Requests with 1350446ff9c980f9b0bdea31eb03bac4.md:Zone.Identifier b/notion_data_team_no_files/Data quality assessment Verification Requests with 1350446ff9c980f9b0bdea31eb03bac4.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Data quality assessment Verification Requests with 1350446ff9c980f9b0bdea31eb03bac4.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Dealing with massive Docker virtual disks 11a0446ff9c9800cb90df90e04780c48.md b/notion_data_team_no_files/Dealing with massive Docker virtual disks 11a0446ff9c9800cb90df90e04780c48.md
new file mode 100644
index 0000000..8b74e4d
--- /dev/null
+++ b/notion_data_team_no_files/Dealing with massive Docker virtual disks 11a0446ff9c9800cb90df90e04780c48.md
@@ -0,0 +1,16 @@
+# Dealing with massive Docker virtual disks
+
+## Scope
+
+Situation: you are running out of disk space on your laptop. There is a very high chance this is caused by how WSL and Docker manage disk space.
+
+To check if this is truly the issue, take the following steps:
+
+- Install WinDirStat if you don’t have it: https://windirstat.net/
+- Start it up and run a scan. It will show you which folders and files are soaking up disk space on your machine.
+- If you see something along the lines of this screenshot (a single `ext4.vhdx` file eating up double- or triple-digit GBs, hiding in some Docker/WSL folder), then it’s probably WSL and Docker that are at fault, and this guide applies to you. If not, sorry, but you’ll have to look elsewhere. Note that WinDirStat can still be useful to spot where you are losing all your space.
+
+## How to fix
+
+- Follow the instructions here: [https://stackoverflow.com/questions/70946140/docker-desktop-wsl-ext4-vhdx-too-large](https://stackoverflow.com/questions/70946140/docker-desktop-wsl-ext4-vhdx-too-large)
+ - If the original page died, here you can find a Wayback Machine snapshot of it: http://web.archive.org/web/20241009132152/https://stackoverflow.com/questions/70946140/docker-desktop-wsl-ext4-vhdx-too-large
\ No newline at end of file
diff --git a/notion_data_team_no_files/Dealing with massive Docker virtual disks 11a0446ff9c9800cb90df90e04780c48.md:Zone.Identifier b/notion_data_team_no_files/Dealing with massive Docker virtual disks 11a0446ff9c9800cb90df90e04780c48.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Dealing with massive Docker virtual disks 11a0446ff9c9800cb90df90e04780c48.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Exploration - MetricFlow - 2024-08-06 f45d91500ad7433d9ff4e094b8a5f40b.md b/notion_data_team_no_files/Exploration - MetricFlow - 2024-08-06 f45d91500ad7433d9ff4e094b8a5f40b.md
new file mode 100644
index 0000000..d73685c
--- /dev/null
+++ b/notion_data_team_no_files/Exploration - MetricFlow - 2024-08-06 f45d91500ad7433d9ff4e094b8a5f40b.md
@@ -0,0 +1,385 @@
+# Exploration - MetricFlow - 2024-08-06
+
+[MetricFlow](https://docs.getdbt.com/docs/build/about-metricflow), which powers the [dbt Semantic Layer](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl), helps you define and manage the logic for your company's metrics. It's an opinionated set of abstractions and helps data consumers retrieve metric datasets from a data platform quickly and efficiently.
+
+This Notion page summarises the explorations we carried out to determine whether MetricFlow could be a good package to integrate within our DWH to create a semantic layer and centralise KPI definitions, in the context of the Business KPIs.
+
+# Table of contents
+
+You can jump directly to Conclusions to check the pros & cons and the recommendation. For examples without setting things up, go to Exploring MetricFlow.
+
+# How to set up your environment
+
+This MetricFlow exploration only requires setting things up in the `data-dwh-dbt-project`.
+
+You will need `dbt-core` >= 1.8 for `dbt-metricflow` to install and work properly. You can install them from this `requirements.txt` file:
+
+[requirements.txt](requirements.txt)
+
+```
+dbt-core~=1.8.4
+dbt-postgres~=1.8.2
+dbt-metricflow~=0.7.1
+```
+
+Once you have the requirements updated and installed, you will need to set up a few new files within DWH. Follow these steps:
+
+1. Create a new folder named `semantic_models`. This should be within the models folder, at the same level as the staging, intermediate and reporting folders.
+2. In the `semantic_models` folder, copy-paste the following files:
+
+ [README.MD](README.md)
+
+ [metricflow_time_spine.sql](metricflow_time_spine.sql)
+
+ [sem_accommodations.sql](sem_accommodations.sql)
+
+ [sem_bookings.sql](sem_bookings.sql)
+
+ [sem_accommodations.yaml](sem_accommodations.yaml)
+
+ [sem_bookings.yaml](sem_bookings.yaml)
+
+ [metrics.yaml](metrics.yaml)
+
+3. Once the files are copied, run `dbt parse` to ensure there are no errors.
+ - Your dwh project should look something like this
+
+ 
+
+
+# Exploring MetricFlow
+
+In the README file you will find similar content to this Notion page, so feel free to skip it.
+
+### Documentation
+
+Make sure to have these links near you when playing with MetricFlow:
+
+- [Introduction to the dbt Semantic Layer](https://docs.getdbt.com/best-practices/how-we-build-our-metrics/semantic-layer-1-intro). Official documentation, you'll find tons of information here. Keep in mind that we're using dbt-core, so not all functionalities are available. But still, it's a must read.
+- [Query examples](https://docs.getdbt.com/docs/build/metricflow-commands#query-examples). This should provide the main commands to be used when playing around with MetricFlow.
+- [Github Example 1](https://github.com/dbt-labs/jaffle-sl-template/tree/main/models/marts/customer360). It's a nice way to see a use-case of implementations, can help creating new configurations based on what other people do.
+
+### DWH test case
+
+At this stage we only have a dummy implementation of the semantic models. Some arbitrary decisions have been taken, so this can be challenged.
+
+Firstly, you’ll notice that these semantic models are in a dedicated `semantic_models` folder. This is to separate them from the rest of the modelling within the DWH.
+
+Secondly, you’ll notice that we have a `metricflow_time_spine.sql`. It’s a bit odd, but apparently it’s mandatory for MetricFlow to run. It serves as a master Date table, similar to what we have with `int_dates.sql`. The name of this model needs to be exactly `metricflow_time_spine.sql`, otherwise it won’t run.
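+
+For reference, the attached file is essentially a day-level date dimension. A minimal sketch, assuming our Postgres backend, could look like this (the `date_day` column name is what MetricFlow conventionally expects for the time spine):
+
+```sql
+-- Minimal sketch of a metricflow_time_spine model: one row per day, exposed as date_day.
+select cast(d as date) as date_day
+from generate_series(
+    date '2020-01-01',
+    date '2030-12-31',
+    interval '1 day'
+) as d
+```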
+
+Thirdly, we have the actual implementation of the semantic models, which would act as data marts. These can be identified by the `sem_` prefix. So far we have a `sem_bookings.sql` with its respective `sem_bookings.yaml`, plus a `sem_accommodations.sql` and `sem_accommodations.yaml`. The `.sql` models are quite straightforward - a copy of the useful fields from the corresponding intermediate tables. What is interesting is the `.yaml`.
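+
+To give an idea of the shape, here is a hedged sketch of what `sem_bookings.sql` could look like, with the column list taken from its yaml below and assuming the intermediate model is `int_core__bookings`:
+
+```sql
+-- Hedged sketch of sem_bookings.sql: a thin selection of booking columns from the intermediate model.
+select
+    id_booking,
+    id_user_guest,
+    id_user_host,
+    id_accommodation,
+    id_verification_request,
+    booking_state,
+    check_in_date_utc,
+    check_out_date_utc,
+    created_date_utc,
+    updated_date_utc
+from {{ ref('int_core__bookings') }}
+```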
+
+Inside the yaml files you’ll see the usual schema documentation, as for other DWH models. What’s new, though, is the following:
+
+- `semantic_models`, for both `sem_accommodation.yaml` and `sem_bookings.yaml`
+- `metrics`, only for `metrics.yaml`. Technically, metrics can be defined inside the `sem_xxx.yaml`, but I found it easier to have them separated since you can combine measures from different sources.
+
+Semantic models allow for the configuration of MetricFlow based on entities, dimensions and measures. This is the main layer of configuration. You can check the configurations of the 2 models here:
+
+- sem_accommodations.yaml
+
+    ```yaml
+ version: 2
+
+ models:
+ - name: sem_accommodations
+ description: Accommodations overview data mart, offering key details for each accommodation. One row per accommodation.
+ columns:
+ - name: id_accommodation
+ description: The unique key of the accommodation mart.
+ tests:
+ - not_null
+ - unique
+ - name: id_user_host
+ description: The foreign key relating to the user who owns the accommodation.
+ - name: country_iso_2
+ description: The country iso code consisting of two characters where the accommodation is located.
+ - name: country_name
+ description: The name of the country where the accommodation is located.
+ - name: country_preferred_currency_code
+ description: The currency code that is preferred in the country of the accommodation.
+ - name: is_active
+ description: Boolean stating if an accommodation can receive bookings (true) or not (false).
+ - name: created_date_utc
+ description: The date of when the accommodation is created in our systems.
+ - name: updated_date_utc
+ description: The date of when the accommodation has been last updated.
+
+ semantic_models:
+ - name: accommodations # The name of the semantic model
+ description: |
+ Accommodation fact table. This table is at the accommodation granularity with one row per accommodation.
+ defaults:
+ agg_time_dimension: created_date_utc
+ model: ref('sem_accommodations')
+ entities: # Entities, which usually correspond to keys in the table.
+ - name: id_accommodation
+ type: primary
+ - name: id_user_host
+ type: foreign
+ expr: id_user_host
+
+ dimensions: # Dimensions are either categorical or time. They add additional context to metrics and the typical querying pattern is Metric by Dimension.
+ - name: created_date_utc
+ expr: created_date_utc
+ type: time
+ type_params:
+ time_granularity: day
+ - name: updated_date_utc
+ expr: updated_date_utc
+ type: time
+ type_params:
+ time_granularity: day
+ - name: country_preferred_currency_code
+ type: categorical
+ expr: country_preferred_currency_code
+ - name: is_active
+ type: categorical
+ expr: is_active
+ - name: country_iso_2
+ type: categorical
+ expr: country_iso_2
+
+ measures: # Measures, which are the aggregations on the columns in the table.
+ - name: accommodation_count
+ description: The total amount of accommodations.
+ expr: 1
+ agg: sum
+ - name: hosts_with_accommodations_count_distinct
+ description: Distinct count of host users with a booking.
+ agg: count_distinct
+ expr: id_user_host
+ - name: countries_with_accommodations_count_distinct
+ agg: count_distinct
+ expr: country_iso_2
+
+ ```
+
+- sem_bookings.yaml
+
+    ```yaml
+ version: 2
+
+ models:
+ - name: sem_bookings
+ description: Bookings overview data mart, offering key details for each booking. One row per booking.
+ columns:
+ - name: id_booking
+ description: The unique key of the booking mart.
+ tests:
+ - not_null
+ - unique
+ - name: id_user_guest
+ description: The foreign key relating to the user who has booked an accommodation.
+ - name: id_user_host
+ description: The foreign key relating to the user who owns the accommodation.
+ - name: id_accommodation
+ description: The foreign key relating to the accommodation of this booking.
+ - name: id_verification_request
+ description: The foreign key relating to the guest verification request, if any, of this booking. Can be null.
+ - name: booking_state
+ description: The state of the booking.
+ - name: check_in_date_utc
+ description: The date of when the check in of the booking takes place.
+ - name: check_out_date_utc
+ description: The date of when the check out of the booking takes place.
+ - name: created_date_utc
+ description: The date of when the booking is created in our systems.
+ - name: updated_date_utc
+ description: The date of when the booking has been last updated.
+
+
+ semantic_models:
+ - name: bookings # The name of the semantic model
+ description: |
+ Booking fact table. This table is at the booking granularity with one row per booking.
+ defaults:
+ agg_time_dimension: created_date_utc
+ model: ref('sem_bookings')
+ entities: # Entities, which usually correspond to keys in the table.
+ - name: id_booking
+ type: primary
+ - name: id_user_guest
+ type: foreign
+ expr: id_user_guest
+ - name: id_user_host
+ type: foreign
+ expr: id_user_host
+ - name: id_accommodation
+ type: foreign
+ expr: id_accommodation
+ - name: id_verification_request
+ type: foreign
+ expr: id_verification_request
+
+ dimensions: # Dimensions are either categorical or time. They add additional context to metrics and the typical querying pattern is Metric by Dimension.
+ - name: created_date_utc
+ expr: created_date_utc
+ type: time
+ type_params:
+ time_granularity: day
+ - name: check_in_date_utc
+ expr: check_in_date_utc
+ type: time
+ type_params:
+ time_granularity: day
+ - name: check_out_date_utc
+ expr: check_out_date_utc
+ type: time
+ type_params:
+ time_granularity: day
+ - name: updated_date_utc
+ expr: updated_date_utc
+ type: time
+ type_params:
+ time_granularity: day
+ - name: booking_state
+ type: categorical
+ expr: booking_state
+
+ measures: # Measures, which are the aggregations on the columns in the table.
+ - name: booking_count
+ description: The total amount of bookings.
+ expr: 1
+ agg: sum
+ - name: hosts_with_bookings_count_distinct
+ description: Distinct count of host users with a booking.
+ agg: count_distinct
+ expr: id_user_host
+ - name: guests_with_bookings_count_distinct
+ description: Distinct count of guest users with a booking.
+ agg: count_distinct
+ expr: id_user_guest
+ - name: accommodations_with_bookings_count_distinct
+ description: Distinct count of accommodations with a booking.
+ agg: count_distinct
+ expr: id_accommodation
+ - name: verification_requests_with_bookings_count_distinct
+ agg: count_distinct
+ expr: id_verification_request
+
+ ```
+
+
+Metrics provide a way to read from the previously configured semantic models and can be used to query the data in multiple ways, including by different dimensions, orderings, time periods, filters, etc. Here’s the configuration of the metrics file:
+
+- metrics.yaml
+
+    ```yaml
+ version: 2
+
+ metrics:
+ - name: total_accommodations
+ description: Count of unique accommodations.
+ label: Total Accommodations
+ type: simple
+ type_params:
+ measure: accommodation_count
+ - name: total_bookings
+ description: Count of unique bookings.
+ label: Total Bookings
+ type: simple
+ type_params:
+ measure: booking_count
+ - name: bookings_growth_mom
+ description: Percentage growth of bookings compared to 1 month ago.
+ type: derived
+ label: Bookings Growth % M/M
+ type_params:
+ expr: (current_bookings - previous_month_bookings) * 100 / previous_month_bookings
+ metrics:
+ - name: total_bookings
+ alias: current_bookings
+ - name: total_bookings
+ offset_window: 1 month
+ alias: previous_month_bookings
+
+ ```
+
+
+Note: when changing the configuration of the yaml file, I recommend running the command:
+
+`dbt parse`
+
+Otherwise, I feel like the changes are not applied, even after saving the files.
+
+### Running queries
+
+To run queries we need to use the MetricFlow command. The options for the command can be found by:
+
+`mf --help`
+
+
+
+To run queries you’ll need to use the `mf query` command. To access the options you can run:
+
+`mf query --help`
+
+
+
+Let's try to run some queries!
+
+Retrieve the total amount of bookings
+
+`mf query --metrics total_bookings`
+
+
+
+Retrieve the total amount of bookings by booking state
+
+`mf query --metrics total_bookings --group-by id_booking__booking_state`
+
+
+
+Not impressed? Well, imagine you want to:
+Retrieve the total amount of bookings created, ordered by date, for the period 15th May 2024 to 30th May 2024, and compare the % growth in total bookings vs. what was observed in the previous month.
+
+When setting up different semantic models, make sure to properly define the foreign keys. For example, `sem_bookings` has a foreign key on `id_accommodation`, which links with the primary key of the `sem_accommodations` table. This ensures that you can call metrics defined in the `sem_bookings` model split by dimensions that are in the `sem_accommodations` model. For instance, if you want to retrieve the whole history of total bookings per country, in descending total bookings order:
+
+`mf query --metrics total_bookings --group-by id_accommodation__country_iso_2 --order -total_bookings`
+
+
+
+Note the usage of the '-' in the order clause to order descending.
+
+If we want to retrieve the query that MetricFlow generates, we can add the `--explain` argument when running a query. For instance, for the previous query:
+
+`mf query --metrics total_bookings --group-by id_accommodation__country_iso_2 --order -total_bookings --explain`
+
+This results in the following output:
+
+```
+✔ Success 🦄 - query completed after 0.15 seconds
+🔎 SQL (remove --explain to see data or add --show-dataflow-plan to see the generated dataflow plan):
+SELECT
+ accommodations_src_10000.is_active AS id_accommodation__is_active
+ , SUM(subq_2.booking_count) AS total_bookings
+FROM (
+ SELECT
+ id_accommodation
+ , 1 AS booking_count
+ FROM "dwh"."working"."sem_bookings" bookings_src_10000
+) subq_2
+LEFT OUTER JOIN
+ "dwh"."working"."sem_accommodations" accommodations_src_10000
+ON
+ subq_2.id_accommodation = accommodations_src_10000.id_accommodation
+GROUP BY
+ accommodations_src_10000.is_active
+ORDER BY total_bookings DESC
+```
+
+# Conclusions
+
+**Pros**:
+
+- Very flexible KPI combination. As long as configurations are well built and models properly normalised, it allows crazy flexibility, since you can define dimensions wherever you want.
+- Scalability. Code-wise, each time we need a new dimension or metric, all previous dimensions and metrics already exist, so it’s quite easy to add new ones without breaking existing models.
+- Clean structure. Forcing the yaml configuration ensures the structures are clear and documented - otherwise metrics won’t work… - which is very nice.
+
+**Cons**:
+
+- Learning curve, relatively new package. I struggled a lot to find information to set it up, run commands, etc. There’s very little information available, and part of it is outdated because it refers to other versions of MetricFlow, so… it generates a bit of frustration.
+- Unable to materialise as tables. From what I’ve read, dbt Cloud has integrations and different ways to expose this to data viz tools. On dbt Core, things are much more limited at the moment. I’ve spent some time looking for ways to materialise tables - there’s no native command - and MAYBE it could be possible with a Python integration. But that’s quite overkill at this stage.
+
+**Recommendation: not to use MetricFlow for the time being. I’d gladly re-explore it once it has new features, proper documentation and more users developing with it.**
\ No newline at end of file
diff --git a/notion_data_team_no_files/Exploration - MetricFlow - 2024-08-06 f45d91500ad7433d9ff4e094b8a5f40b.md:Zone.Identifier b/notion_data_team_no_files/Exploration - MetricFlow - 2024-08-06 f45d91500ad7433d9ff4e094b8a5f40b.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Exploration - MetricFlow - 2024-08-06 f45d91500ad7433d9ff4e094b8a5f40b.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Finance Reporting App 1f40446ff9c980699121cb0d804a65e6.md b/notion_data_team_no_files/Finance Reporting App 1f40446ff9c980699121cb0d804a65e6.md
new file mode 100644
index 0000000..f1c7ed6
--- /dev/null
+++ b/notion_data_team_no_files/Finance Reporting App 1f40446ff9c980699121cb0d804a65e6.md
@@ -0,0 +1,3 @@
+# Finance Reporting App
+
+[Budget Report](Budget%20Report%201f40446ff9c98062b778d1ad809dab13.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Finance Reporting App 1f40446ff9c980699121cb0d804a65e6.md:Zone.Identifier b/notion_data_team_no_files/Finance Reporting App 1f40446ff9c980699121cb0d804a65e6.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Finance Reporting App 1f40446ff9c980699121cb0d804a65e6.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Finance Workflow Change – Host Resolution Payments 1de0446ff9c980ccb021fa75288129b0.md b/notion_data_team_no_files/Finance Workflow Change – Host Resolution Payments 1de0446ff9c980ccb021fa75288129b0.md
new file mode 100644
index 0000000..8e93550
--- /dev/null
+++ b/notion_data_team_no_files/Finance Workflow Change – Host Resolution Payments 1de0446ff9c980ccb021fa75288129b0.md
@@ -0,0 +1,84 @@
+# Finance Workflow Change – Host Resolution Payments Handling
+
+## Summary
+
+The Finance team is introducing a structural change in how **Host Resolution Payments** are recorded and processed in our systems. This affects the **source of truth** for these transactions and requires a coordinated update across our **Data Warehouse (DWH)** and **reporting models**.
+
+---
+
+## What’s Changing?
+
+### **Previous Approach:**
+
+- Host resolution payments were handled as **bank transactions**.
+- Stored in the `xero.bank_transactions` table.
+- Payments categorized using:
+ - `account_code = 323` → *E-Deposit Resolution - Host Payment*
+ - `account_code = 316` → *Resolutions - Host Payment*
+
+### **New Approach:**
+
+- Host resolution payments will now be handled as **credit notes** instead.
+- Stored in the `xero.credit_notes` table.
+- Categorization remains the same:
+ - `account_code = '323'` → *E-Deposit claims*
+ - `account_code = '324'` → Check In Hero - Resolutions (Guest) (new as of 23.04 7pm)
+ - `account_code = '316'` → *All other resolution payments*
+
+---
+
+## Effective Date
+
+> 🔔 This change applies to all resolution claims made on or after
+>
+>
+> **📆 March 31st, 2025**
+>
+
+---
+
+## Implementation Plan
+
+### Step 1: Create Unified Resolution Payments Table
+
+A new table will be introduced in the DWH that consolidates **host resolution payments** from both:
+
+- `bank_transactions` (for transactions **before** March 31st, 2025), and
+- `credit_notes` (for transactions **from** March 31st, 2025 **onward**).
+
+This ensures continuity in analysis and reporting across the transition.
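+
+As a rough illustration (not the final model), the consolidation could be sketched along these lines - column names such as `payment_date` and `amount` are assumptions about the Xero schemas, not verified:
+
+```sql
+-- Hypothetical sketch: one unified host resolution payments table across the cutover date.
+with legacy_payments as (
+    select
+        bank_transaction_id as payment_id,    -- assumed identifier column
+        account_code,
+        payment_date,                         -- assumed date column
+        amount
+    from xero.bank_transactions
+    where account_code in ('316', '323')
+      and payment_date < '2025-03-31'
+),
+new_payments as (
+    select
+        credit_note_id as payment_id,         -- assumed identifier column
+        account_code,
+        payment_date,
+        amount
+    from xero.credit_notes
+    where account_code in ('316', '323', '324')
+      and payment_date >= '2025-03-31'
+)
+select * from legacy_payments
+union all
+select * from new_payments
+```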
+
+---
+
+### Step 2: Review & Update Downstream Models
+
+We will **review all DWH models that depend on**:
+
+- `xero.credit_notes`
+- `xero.bank_transactions`
+
+…and assess how the change might affect:
+
+- joins
+- aggregations built on top of these tables
+
+[Credit Notes Downstream Analysis](Credit%20Notes%20Downstream%20Analysis%201de0446ff9c980d9a0c1fd9477d7fcb7.md)
+
+---
+
+### Step 3: Update Reporting
+
+The following reports are expected to require updates:
+
+- **Main KPIs Report**
+- **Host Resolutions Report**
+
+More may follow depending on dependencies.
+
+---
+
+## Key Considerations
+
+- Ensure **no double counting** during the transition period (e.g. if one payment is mistakenly captured in both sources).
+- Validate that the new credit notes data includes **all fields** currently used in `bank_transactions`.
+- Coordinate QA testing with Finance to verify totals match.
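+
+For the double-counting check specifically, a quick validation query could look something like this - the join key (`reference`) is an assumption about how the same payment would appear in both sources:
+
+```sql
+-- Hypothetical sanity check: payments captured both as a bank transaction and as a credit note.
+select
+    bt.reference,
+    bt.amount
+from xero.bank_transactions bt
+inner join xero.credit_notes cn
+    on cn.reference = bt.reference        -- assumed shared reference
+where bt.account_code in ('316', '323')
+  and cn.account_code in ('316', '323', '324')
+```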
\ No newline at end of file
diff --git a/notion_data_team_no_files/Finance Workflow Change – Host Resolution Payments 1de0446ff9c980ccb021fa75288129b0.md:Zone.Identifier b/notion_data_team_no_files/Finance Workflow Change – Host Resolution Payments 1de0446ff9c980ccb021fa75288129b0.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Finance Workflow Change – Host Resolution Payments 1de0446ff9c980ccb021fa75288129b0.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Git conflict! Help! fbacaa3a3fa7455d9feddcb88299d3d0.md b/notion_data_team_no_files/Git conflict! Help! fbacaa3a3fa7455d9feddcb88299d3d0.md
new file mode 100644
index 0000000..73f2336
--- /dev/null
+++ b/notion_data_team_no_files/Git conflict! Help! fbacaa3a3fa7455d9feddcb88299d3d0.md
@@ -0,0 +1,16 @@
+# Git conflict! Help!
+
+There’s a million reasons you might have some conflict in git and I won’t cover all of them.
+
+Instead, I’ll focus on the reason that causes this 99% of times:
+
+- You are working on `your-branch` outside of `master`
+- `master` has advanced and `your-branch` is not up to date with those changes
+- You need to bring those changes into `your-branch` before completing your PR.
+
+You can easily do this by using `git rebase` and fixing small conflicts on your branch if they appear.
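+
+For reference, a typical sequence looks something like this (assuming `master` is the branch you need to catch up with, and `your-branch` is checked out):
+
+- `git fetch origin`
+- `git rebase origin/master`
+- fix any conflicts, then `git add <fixed files>` and `git rebase --continue`
+- `git push --force-with-lease`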
+
+Check this video to understand `git rebase` and how to use it:
+https://www.youtube.com/watch?v=f1wnYdLEpgI
+
+If you still don’t understand jackshit and are confused and scared of touching stuff, message Pablo.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Git conflict! Help! fbacaa3a3fa7455d9feddcb88299d3d0.md:Zone.Identifier b/notion_data_team_no_files/Git conflict! Help! fbacaa3a3fa7455d9feddcb88299d3d0.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Git conflict! Help! fbacaa3a3fa7455d9feddcb88299d3d0.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Glad you’re back! 1130446ff9c98005a326f52608abfd91.md b/notion_data_team_no_files/Glad you’re back! 1130446ff9c98005a326f52608abfd91.md
new file mode 100644
index 0000000..ab3076a
--- /dev/null
+++ b/notion_data_team_no_files/Glad you’re back! 1130446ff9c98005a326f52608abfd91.md
@@ -0,0 +1,13 @@
+# Glad you’re back!
+
+Summary of what happened for our beloved team members while they were off
+
+[2025-03-17 - Glad you’re back, Daddy Pablo](2025-03-17%20-%20Glad%20you%E2%80%99re%20back,%20Daddy%20Pablo%201b40446ff9c980ce837edb9154593919.md)
+
+[2024-10-24 - Glad you’re back, Joaquin](2024-10-24%20-%20Glad%20you%E2%80%99re%20back,%20Joaquin%201270446ff9c9808bb60fd1e759ff421c.md)
+
+[2024-10-03 - Glad you’re back, Ben](2024-10-03%20-%20Glad%20you%E2%80%99re%20back,%20Ben%201130446ff9c98007af11c24731bd2ac7.md)
+
+[2024-08-20 - Glad you’re back, Uri](2024-08-20%20-%20Glad%20you%E2%80%99re%20back,%20Uri%20cc3bb68690e04a5f952d0dd78d5abbef.md)
+
+[2024-07-26 - Glad you’re back, Pablo](2024-07-26%20-%20Glad%20you%E2%80%99re%20back,%20Pablo%20f40e0ea62143420d96b409f8f78e9fd9.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Glad you’re back! 1130446ff9c98005a326f52608abfd91.md:Zone.Identifier b/notion_data_team_no_files/Glad you’re back! 1130446ff9c98005a326f52608abfd91.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Glad you’re back! 1130446ff9c98005a326f52608abfd91.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Guerrilla engineering before Pablo’s temporarily l 15a0446ff9c98068a3d0efdb31680f95.md b/notion_data_team_no_files/Guerrilla engineering before Pablo’s temporarily l 15a0446ff9c98068a3d0efdb31680f95.md
new file mode 100644
index 0000000..78337ec
--- /dev/null
+++ b/notion_data_team_no_files/Guerrilla engineering before Pablo’s temporarily l 15a0446ff9c98068a3d0efdb31680f95.md
@@ -0,0 +1,14 @@
+# Guerrilla engineering before Pablo’s temporarily leaves
+
+## Scopes to cover
+
+- [x] FX rates available in SQL Server
+- Cosmos DB → DWH → Hyperline data flow for invoicing purposes
+- Hyperline → Billing DB → DWH data flow to maintain revenue reporting
+
+## Decisions on 2024-12-16
+
+- Ben R. takes care of raising a Postgres Server which we will call the “Billing DB”, which we use for all data exchange with Hyperline.
+- Data Team will push FX rates data into the billing db with Airbyte.
+- Data Team will push API line items into the billing db with Airbyte.
+- We would like for invoice and credit note data from Hyperline to be fed back into the billing db, but it’s unclear how we need to manage this.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Guerrilla engineering before Pablo’s temporarily l 15a0446ff9c98068a3d0efdb31680f95.md:Zone.Identifier b/notion_data_team_no_files/Guerrilla engineering before Pablo’s temporarily l 15a0446ff9c98068a3d0efdb31680f95.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Guerrilla engineering before Pablo’s temporarily l 15a0446ff9c98068a3d0efdb31680f95.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Guest Products DWH Refactor Ramblings 1ec0446ff9c98055872fc4c29b23e40e.md b/notion_data_team_no_files/Guest Products DWH Refactor Ramblings 1ec0446ff9c98055872fc4c29b23e40e.md
new file mode 100644
index 0000000..948e194
--- /dev/null
+++ b/notion_data_team_no_files/Guest Products DWH Refactor Ramblings 1ec0446ff9c98055872fc4c29b23e40e.md
@@ -0,0 +1,203 @@
+# Guest Products DWH Refactor Ramblings
+
+*This doc. focuses on DWH and reporting impact. It does not cover the impact on old invoicing process.*
+
+Missing these destroys the world on D-Day:
+
+- Document how to find things out in old model vs new model
+ - e.g
+ - A service has been offered
+ - A service has been purchased
+ - A service has been paid
+- Inventory of downstream dependencies on Guest product related thingies
+- Release and migration steps and plan
+
+Doesn’t destroy the world on D-Day:
+
+- Reflect on new stuff that we can do with the new data model and we couldn’t before
+- Think a bit about Confident Stay specific reporting and KPIs
+
+## Rough plan
+
+1. Document new model with Data lenses on, understand it deeply, understand what activating the feature flag will entail in low-level detail
+2. Identify all the affected parts of DWH models and downstream reporting
+3. Design new DWH and reporting artifacts
+ 1. Note that some models for the new guest product tables are already present. Take into consideration.
+4. Test things out thoroughly
+5. Plan and execute feature flag activation
+
+## Old and new tables
+
+- What are the existing tables that will get deprecated, completely or partially?
+- What are the new tables?
+- Descriptions of the tables
+ - General
+ - Per col. info
+ - Relation with business domain (when does a new record get created/updated/deleted, relation to domain semantics, etc)
+
+## How to find things out
+
+How do you find the following in SQL Server?
+
+Logical and SQL explanations welcome.
+
+| Question | Old implementation | New Implementation |
+| --- | --- | --- |
+| How to find out that a service was offered in a Guest Journey | | - Uri’s understanding: For a given VerificationRequest, any record that appears in VerificationRequestToGuestProduct (using GuestProductId) |
+| How to find out that a service was visually seen by a guest in a Guest Journey | | - Uri’s understanding: Not possible at the moment. The schema shows a ShownToGuest [Future] in VerificationRequestToGuestProduct |
+| How to find out that a service was selected by a guest in a Guest Journey | | - Uri’s understanding: Not possible. My understanding is that funnel would move from Shown to Paid (once Shown is available). Effectively we would not be able to discern “the Guest has selected a GP but not paid it”. I don’t think it’s critical, though. |
+| How to find out that a service was paid by a guest in a Guest Journey | | - Uri’s understanding: Payment table will duplicate as PaymentRef as services paid, including Guest Products and Payment Validation services (Waiver, Deposit). PaymentId will be unique and will be used to link with VerificationToPayment for Deposit/Waiver and with VerificationRequestGuestProductToPayment for Guest Products. |
+| How to find out which account/listings have a specific service active for their Guest Journeys (both currently and also historically) | | - Uri’s understanding: Guest Products do not apply at listing level, but can be disabled at Account level. A Custom configuration is determined by having a HostUserId set in the Configuration table. The Enabled/Disabled status is then available in ConfigurationStatus. |
+| How to find prices for a service at whatever granularity makes sense (Truvi-wide, per account, per listing, etc) | | - Uri’s understanding: at the moment, Guest Products do not allow for custom pricing. Product price per currency is available in ConfigurationPricePlan. |
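+
+To make the "paid" path above a bit more concrete, a rough sketch of the join in the new model could look like this - the table names are the ones discussed in this doc, but the linking column names are my assumptions and not verified:
+
+```sql
+-- Hypothetical: guest products paid within a given Guest Journey (verification request).
+select
+    vrgp.VerificationRequestId,        -- assumed column
+    vrgp.GuestProductId,               -- assumed column
+    p.PaymentId,
+    p.PaymentRef                       -- per Uri's notes, shared across services paid together
+from VerificationRequestToGuestProduct vrgp
+join VerificationRequestGuestProductToPayment vrgpp
+    on vrgpp.VerificationRequestToGuestProductId = vrgp.Id   -- assumed linking key
+join Payment p
+    on p.PaymentId = vrgpp.PaymentId                         -- assumed linking key
+```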
+
+## DWH status as for 8th May 2025
+
+### New tables linked to Guest Products
+
+Tables available in `sync_guest_product` (note we needed to do a parallel sync to Core because of a Product table that would potentially clash otherwise):
+
+- `Product`
+- `DisplayDetail`
+- `ConfigurationPricePlan`
+- `Configuration`
+- `ConfigurationLevel`
+- `ConfigurationStatus`
+
+Tables available in `sync_core`:
+
+- `VerificationRequestToGuestProduct`
+- `VerificationRequestGuestProductToPayment`
+
+Tables that are NOT available yet but currently appear in `live`:
+
+- `dbo.UserProductBundleToGuestProduct`
+
+### Current dependency mapping sync → intermediate
+
+- Note that `int_core__guest_product_price_plans` also depends on `int_core__guest_products` and on `stg_core__currency`. These are not represented in the table below.
+
+| Sync Schema | Sync Table | Staging Table | Intermediate Table |
+| --- | --- | --- | --- |
+| `sync_guest_product` | `Product` | `stg_core__guest_product` | `int_core__guest_products` |
+| `sync_guest_product` | `DisplayDetail` | `stg_core__guest_product_display_detail` | `int_core__guest_products` |
+| `sync_guest_product` | `ConfigurationPricePlan` | `stg_core__guest_product_configuration_price_plan` | `int_core__guest_product_price_plans` |
+| `sync_guest_product` | `Configuration` | `stg_core__guest_product_configuration` | `int_core__guest_product_price_plans` |
+| `sync_guest_product` | `ConfigurationLevel` | `stg_core__guest_product_configuration` | `int_core__guest_product_price_plans` |
+| `sync_guest_product` | `ConfigurationStatus` | `stg_core__guest_product_configuration_status` | Not Available Yet |
+| `sync_core` | `VerificationRequestToGuestProduct` | `stg_core__verification_request_to_guest_product` | Not Available Yet |
+| `sync_core` | `VerificationRequestGuestProductToPayment` | `stg_core__verification_request_guest_product_to_payment` | Not Available Yet |
+
+### Lineage Graph
+
+- Can be retrieved with `--select +int_core__guest_product_price_plans+ +stg_core__verification_request_to_guest_product+ +stg_core__verification_request_guest_product_to_payment+`
+
+ 
+
+
+## Design draft for DWH
+
+- It looks like the most critical models where changes will need to happen are:
+ - `int_core__verification_payments_v2`
+ - `int_core_check_in_cover_prices`
+ - `stg_core__payment_validation_set_to_currency`
+
+- Long-term view:
+ - Create new models for Guest Products configurations (not payments). The CIH offering at host level should be already available with the new tables and should match with the old - no blockers
+ - Create an agnostic model to handle Guest Products payments in the Old and New way. Specially for CIH modelisation.
+ - Make Verification Payments based on actual Verifications, i.e., Payment Validation services: only Waiver and Deposit.
+ - Create a Guest Product Payments for any Guest Product. This should include CIH in both the old and new way, as well as any new Guest Product in the new modelisation.
+ - Create a Guest Journey Payments that combines both Verification Payments and Guest Product Payments. This abstraction can be used for KPIs, and would read from previous Verification Payments and Guest Product Payments splits.
+- Plan (a sketch of the Guest Journey Payments abstraction is included right after this list)
+ - Refactor of Verification Payments V2:
+ - Create a Guest Product Payments table that contains only CIH-Old way. This should match the contents of Verification Payments V2 when filtering by CIH.
+ - Create a Verification Product Payments table that contains only Waiver + Deposit. This should match the contents of Verification Payments V2 when excluding CIH.
+ - Create an upper layer abstraction called Guest Journey Payments = Guest Product U Verification Product, that reads from the previous 2 tables. Here we should handle the tax logic as well. Contents should be pretty much the same as Verification Payments V2.
+ - Modify first level dependants of Verification Product Payments V2 to read from Guest Journey Payments. If some of the dependants do not need to combine both Guest Products and Verification Payments, modify accordingly with the low-level tables. As of today, this include 10 models:
+ - Lineage Graph
+
+ 
+
+ - Split per impact category
+ - Main KPIs:
+ - int_kpis__metric_daily_guest_payments
+ - int_kpis__metric_daily_guest_journeys_with_payment
+ - Guest KPIs:
+ - int_kpis__metric_daily_check_in_attributed_guest_journeys
+ - AB Test monitoring
+ - int_core__ab_test_monitoring_guest_journey
+ - New Dash Reporting
+ - int_core__booking_service_detail
+ - CIH Reporting
+ - int_core__vr_check_in_cover
+ - Guest Satisfaction
+ - int_core__guest_satisfaction_responses
+ - Truvi Reporting (legacy SH reporting)
+ - int_core__booking_details
+ - int_core__payments
+ - Reporting tables
+ - core__verification_payments_v2
+ - Remove Verification Payments model once there’s no additional dependencies
+ - Propagate new Guest Products modelisation
+ - Refactor CIH offered on Host side → already available in new tables
+ - Rest: agnostic CIH payments, etc
+
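+A minimal sketch of the Guest Journey Payments abstraction mentioned in the plan above - model and column names here are placeholders, not the final dbt naming:
+
+```sql
+-- Hypothetical union of the two lower-level payment models into a single reporting surface.
+with guest_product_payments as (
+    select id_payment, id_verification_request, amount, currency,
+        'guest_product' as payment_category
+    from intermediate.int_core__guest_product_payments          -- placeholder model name
+),
+verification_product_payments as (
+    select id_payment, id_verification_request, amount, currency,
+        'verification_product' as payment_category
+    from intermediate.int_core__verification_product_payments   -- placeholder model name
+)
+select * from guest_product_payments
+union all
+select * from verification_product_payments
+-- tax logic from Verification Payments V2 would be handled on top of this union
+```
+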
+# 2025-05-08 Call with Lawrence
+
+- Can we agree to activate the Feature Flag after June 1st?
+ - Yes
+- Three feature flags
+    - GuestProductsEnabled (for GuestJourney)
+ - GuestProductsEnabled (for HostDash)
+ - ShowGuestProductsForBundles
+- Verification-related Guest paid things (Waivers, Deposits, etc) aka. Verification Products remain completely the same and we don’t need to change anything around it?
+ - Yes.
+ - The only subtlety is that Payments will be shared with Guest Products
+- Will there be any migrations happening at the same time the Feature Flags get activated?
+ - No.
+    - Preparation migrations were already run and are in production.
+ - Clean up migrations will be needed in the future to delete stuff, but there is no rush to run them.
+
+# Additional answers from Lawrence
+
+1. In the future, if a guest pays for a Waiver and a CheckInCover, how would Payment table look like?
+ - Payment table will have 2 records, with different IDs: one for Waiver and another for CheckInCover
+ - However, the payment reference will have the same value for both records
+2. Can Guest Products be refunded?
+ 1. Historically we had 4 CheckInCover refunded
+ 2. However it’s not implemented at the moment, but might be potentially in the future
+3. Why is there no Payment Due Date in Guest Product related tables?
+ 1. We do not need a Payment Due Date, as we take the payment up front and if the payment fails, then they aren't able to complete the Journey.
+ 1. When we implement refund functionality we should attempt to refund straight away, but we should introduce a RefundDueDate in the appropriate table, so that we can handle retries, in case there is an issue with the refund.
+4. On `UserProductBundleToGuestProduct` table:
+ 1. I see that some `UserProductBundle` do not have an equivalent record in `UserProductBundleToGuestProduct`. Does this mean that if the record does NOT exist, then guest products are NOT enabled?
+ - Not all bundles require a Guest to complete a Guest Journey. In these scenarios, I don't think we will be populating the UserProductBundleToGuestProduct.
+ - Below services that require a Guest Journey:
+
+ ```sql
+ screening
+ Basic -> No GuestJourney
+ Screening Plus -> GuestJourney
+ Id Verification -> GuestJourney
+ Sex Offender Check -> GuestJourney
+
+ deposit-management
+ Skip -> GuestJourney If(screening is GuestJourney) Otherwise: No GuestJourney
+ Waiver -> GuestJourney
+ Deposit -> GuestJourney
+ Waiver or Deposit -> GuestJourney
+ Waiver Pro -> GuestJourney If(screening is GuestJourney) Otherwise: No GuestJourney
+ ```
+
+ 2. If a record exists in `UserProductBundleToGuestProduct` and then the enabled/disabled changes, does this create a new record in the table?
+ - The `UserProductBundleToGuestProduct` table was never intended to be changed, as currently bundles should not be changed (not possible from the front end)
+    - However, I do know that developers are going into the tables and changing the data manually. And if this was done in these tables, then you are correct, there would be no way of keeping track of historical changes (which isn't great). So for now, I would be pushing back on any requests to change this data. The idea is that if they want to change something, they will need to create a new bundle.
+ 3. Is it normal that `UpdatedDate` is null for some records while it's filled for others?
+ - Basically, the `UpdatedDate` is a nullable field, which implies that it doesn't need to be populated.
+ - So when we create a new record, it doesn't have to be populated, but when we update a record, we "should" always remember to populate it.
+    - Our C# code always populates this field regardless of create or update.
+    However, our manual seeding scripts may or may not populate it, depending on the developer who wrote them.
+
+# Misc
+
+- Figma with Guest squad design: https://www.figma.com/board/X8VoMAhrwsyZkt8N0PT6UW/Guest-Product-Schema-v2--New-Dash-Integration-?node-id=1-12&t=IGqaZMKq8oMxnpqa-0
\ No newline at end of file
diff --git a/notion_data_team_no_files/Guest Products DWH Refactor Ramblings 1ec0446ff9c98055872fc4c29b23e40e.md:Zone.Identifier b/notion_data_team_no_files/Guest Products DWH Refactor Ramblings 1ec0446ff9c98055872fc4c29b23e40e.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Guest Products DWH Refactor Ramblings 1ec0446ff9c98055872fc4c29b23e40e.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/HTVR Invoicing explainer 1c10446ff9c9801ca39ad230f8931139.md b/notion_data_team_no_files/HTVR Invoicing explainer 1c10446ff9c9801ca39ad230f8931139.md
new file mode 100644
index 0000000..41f0a69
--- /dev/null
+++ b/notion_data_team_no_files/HTVR Invoicing explainer 1c10446ff9c9801ca39ad230f8931139.md
@@ -0,0 +1,105 @@
+# HTVR Invoicing explainer
+
+# Summary
+
+The net position of HTVR between February 2024 and 31st January 2025 is, according to invoicing data, ~303K GBP. The client is stating that the net amount should be ~327K GBP, which is NOT the case.
+
+However, the client can challenge us for a double charged amount of 3,030 USD because of 303 duplicated bookings.
+
+Cleaning the duplicated bookings and re-doing the computation, the net position of HTVR for the period would be ~306K USD, coming from:
+
+- ~ -124K USD Booking Fees, and
+- ~430K USD Net Waivers Amount due to Host
+
+Re-doing the client hypothesis, the net position would result in ~319K USD, but this is wrong because it cannot be assumed that the computation comes solely from Approved Bookings at a 30 USD booking fee, since:
+
+- Bookings with status different to Approved are also charged. This accounts for ~ -18K USD
+- Waivers can be charged to Bookings with Status different than Approved. This accounts for ~ 5K USD
+
+# Detail
+
+# Cleaning Bookings data
+
+| | Count |
+| --- | --- |
+| Total Bookings | 12,730 |
+| Duplicated Bookings | 303 |
+| Unique Bookings | 12,427 |
+| Unique Approved Bookings | 10,626 |
+
+We have a discrepancy of 303 bookings that are duplicated. We will remove them to have a clean computation. These are available in the Unique Bookings tab.
+
+# Hypothesis
+
+> Amount due to Host is 30 USD X Unique Approved Bookings = 318,780 USD
+>
+
+# Actual computation (excl. duplicated bookings)
+
+We invoice based on:
+
+- Booking Fees
+ - This applies to ALL bookings, indistinctly of the status (Approved, Cancelled, Not Approved).
+ - Therefore: 12,427 Unique Bookings X 10 USD booking fee = 124,270 USD.
+- Net Waivers Amount due to Host
+ - This takes into account Waivers that have been charged in period minus waivers that have been refunded in period.
+ - Waivers Charged up to 31st Jan 2025 = 11,704
+ - Source: Stripe Export - Paid, filtering by happened_after_january = FALSE
+ - Waivers Refunded up to 31st Jan 2025 = 941
+ - Source: Stripe Export - Refunded, filtering by happened_after_january = FALSE
+ - Net Waivers Count = 11,704 - 941 = 10,763
+ - Net Waivers Amount due to Host = 10,763 X 40 USD = 430,520 USD
+
+Which results into:
+
+- Net Payouts (excl. duplicated bookings) = Net Waivers Amount due to Host - Booking Fees = 430,520 USD - 124,270 USD = 306,250 USD
+
+> Note here that this figure differs from the net amount that has actually been invoiced, as the actual amount invoiced takes into account 3,030 USD from the 303 duplicated bookings.
+>
+
+# Explaining differences
+
+The hypothesis results in a position of 318,780 USD in favour of the Host.
+
+The actual computation, excluding the duplicated bookings, results in a position of 306,250 USD in favour of the Host.
+
+The Host could mistakenly assume that we owe them 318,780 USD - 306,250 USD = 12,530 USD
+
+## Difference 1: Bookings with status different to Approved are also charged
+
+The amount of unique bookings that don’t have the status approved can be retrieved from:
+
+- Unique Bookings - Unique Approved Bookings = 1,801
+
+Therefore, the Booking Fees from these statuses will be:
+
+- 10 USD X 1,801 = 18,010 USD
+
+This explains MORE than the actual difference of 12,530 USD, so technically we could mistakenly assume that we have paid more to the Host than what’s needed. Let’s jump into difference 2.
+
+## Difference 2: Waivers are not always being charged in the same period, and are not only linked to Approved Bookings
+
+The count of net waivers up to 31st January 2025 is 10,763 Waivers.
+
+This is more than the actual number of Approved Unique Bookings, which is 10,626.
+
+Thus, simply by looking at the figures, we can see that we actually received more Waivers than Approved Unique Bookings in the same period. This is because Bookings with statuses other than Approved can also be charged and not necessarily refunded.
+
+- Waivers attributed to Bookings with a Charge and without a Refund with Status different than Approved = 134
+    - Source: Unique Bookings - filtering by Is Waiver Charged = True, Is Waiver Refunded = False, Status ≠ Approved
+
+Applying a 40 USD amount due to host, this results in:
+
+- Waiver Amount due to Host on Bookings with Status different than Approved = 134 X 40 USD = 5,360 USD
+
+## Consolidation
+
+- Bookings with status different to Approved are also charged = 18,010 USD
+- Waiver Amount due to Host on Bookings with Status different than Approved = 5,360 USD
+- Difference = 18,010 USD - 5,360 USD = 12,650 USD
+
+Comparing this figure with the difference between the Hypothesis and the actual computation, which is 12,530 USD, results in:
+
+- 12,650 - 12,530 USD = 120 USD
+
+Technically, we cannot assume that every Approved Booking will have a waiver charged, nor that the charge happens in the same period. Also, while it is generally true that an Approved Booking will have a Waiver charged, there are a few cases of approved bookings without a waiver charged, and even some cases in which the waiver is charged and then refunded. Therefore, this can explain the differences.
\ No newline at end of file
diff --git a/notion_data_team_no_files/HTVR Invoicing explainer 1c10446ff9c9801ca39ad230f8931139.md:Zone.Identifier b/notion_data_team_no_files/HTVR Invoicing explainer 1c10446ff9c9801ca39ad230f8931139.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/HTVR Invoicing explainer 1c10446ff9c9801ca39ad230f8931139.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/How to SQL rollup 1600446ff9c9806c9fdac2ff48e48bd3.md b/notion_data_team_no_files/How to SQL rollup 1600446ff9c9806c9fdac2ff48e48bd3.md
new file mode 100644
index 0000000..1283400
--- /dev/null
+++ b/notion_data_team_no_files/How to SQL rollup 1600446ff9c9806c9fdac2ff48e48bd3.md
@@ -0,0 +1,152 @@
+# How to SQL rollup
+
+# Introduction
+
+Do you need to compute metric aggregations at different granularities and you’re tired of writing tons of code just to get a Global category?
+
+Introducing SQL rollup!
+
+Instead of doing this:
+
+```sql
+with booking_count_per_state as (
+select
+ booking_state,
+ count(1) as booking_count
+from
+ intermediate.int_core__bookings
+group by
+ 1
+),
+global_bookings as (
+select
+ 'Global' as booking_state,
+ sum(booking_count) as booking_count
+from
+ booking_count_per_state
+group by
+ 1
+)
+select *
+from global_bookings
+union all
+select *
+from booking_count_per_state
+```
+
+To get:
+
+
+
+You can use `ROLLUP` to simplify the code:
+
+```sql
+select
+ coalesce(booking_state,'Global') as booking_state,
+ count(1) as booking_count
+from
+ intermediate.int_core__bookings
+group by
+ rollup(booking_state) -- NOT EQUIVALENT to group by rollup(1)!!!
+```
+
+and get the same results (note we didn’t order by any field):
+
+
+
+Be aware that the use of numeric wildcards such as `group by rollup(1)` **is not advised**. Please read the next section to understand more.
+
+# Step-by-step guide (to avoid common mistakes)
+
+By default, when rolling up, SQL will create a category holding the aggregated value, with a NULL name. If we follow the example with Bookings, we would have:
+
+```sql
+select
+ booking_state,
+ count(1) as booking_count
+from
+ intermediate.int_core__bookings
+group by
+ rollup(1) -- equivalent to group by rollup(booking_state)
+```
+
+
+
+That’s why in this case we used `COALESCE` to specify that the resulting value should be called Global, since it counts all Bookings.
+
+However, **we should be very, very conscious of null values**. Let’s go for another example, in this case, on guest payments. Let’s count the total amount of payments per verification payment type. To start with, we can do:
+
+```sql
+-- SIMPLE QUERY
+select
+ verification_payment_type,
+ count(1) as total_payments
+from
+ intermediate.int_core__verification_payments icvp
+group by
+ 1 -- equivalent to group by verification_payment_type
+order by
+ total_payments desc
+```
+
+
+
+And we see we have 2k rows that do not have any verification payment type set. Since we know that rollup will create another category with NULL, first things first, we should tackle these verification payments that do not have a type set. Intuitively, we can name this category as `Unknown`:
+
+```sql
+-- QUERY SETTING UNKNOWN TO NULL VALUES
+select
+ coalesce(verification_payment_type, 'Unknown') as verification_payment_type,
+ count(1) as total_payments
+from
+ intermediate.int_core__verification_payments icvp
+group by
+ 1 -- equivalent to group by coalesce(verification_payment_type, 'Unknown')
+```
+
+
+
+Now, if we add the rollup, we should get a new NULL category with the total amount of payments:
+
+```sql
+-- QUERY SETTING UNKNOWN TO NULL VALUES WITH ROLLUP (1)
+select
+ coalesce(verification_payment_type, 'Unknown') as verification_payment_type,
+ count(1) as total_payments
+from
+ intermediate.int_core__verification_payments icvp
+group by
+ rollup(1) -- equivalent to group by rollup(coalesce(verification_payment_type, 'Unknown'))
+```
+
+
+
+Note that we’re using `group by rollup(1)`. This is equivalent of doing `group by rollup(coalesce(verification_payment_type, 'Unknown'))`
+
+Cool but… how can we now re-name the resulting Null value to Global?
+
+- We need to add another coalesce in the select clause,
+- But we need to keep the previous aggregation `group by rollup(coalesce(verification_payment_type, 'Unknown'))`
+
+In other words:
+
+```sql
+-- QUERY SETTING UNKNOWN TO NULL VALUES WITH WELL CREATED ROLLUP
+select
+ coalesce(coalesce(verification_payment_type, 'Unknown'),'Global') as verification_payment_type,
+ count(1) as total_payments
+from
+ intermediate.int_core__verification_payments icvp
+group by
+ rollup(coalesce(verification_payment_type, 'Unknown')) -- NOT EQUIVALENT to group by rollup(1)!!!
+```
+
+
+
+Et voilà!
+
+The key aspect here is understanding that the field we’re showing in the `select` clause is different from the field we’re using to conduct the `group by rollup`. Therefore, it’s strongly recommended to explicitly specify the `group by rollup` expression instead of using numeric wildcards.
+
+# Additional Links
+
+https://neon.tech/postgresql/postgresql-tutorial/postgresql-rollup
\ No newline at end of file
diff --git a/notion_data_team_no_files/How to SQL rollup 1600446ff9c9806c9fdac2ff48e48bd3.md:Zone.Identifier b/notion_data_team_no_files/How to SQL rollup 1600446ff9c9806c9fdac2ff48e48bd3.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/How to SQL rollup 1600446ff9c9806c9fdac2ff48e48bd3.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/How to publish a Power BI report 32dfe47f1d894205a3b19d994045db7f.md b/notion_data_team_no_files/How to publish a Power BI report 32dfe47f1d894205a3b19d994045db7f.md
new file mode 100644
index 0000000..df583eb
--- /dev/null
+++ b/notion_data_team_no_files/How to publish a Power BI report 32dfe47f1d894205a3b19d994045db7f.md
@@ -0,0 +1,160 @@
+# How to publish a Power BI report
+
+**Table of contents:**
+
+# Publishing a report to be reviewed
+
+You have created a Power BI report with amazing insights and you want to make it available to your teammates for them to review your incredible masterpiece. Here are the steps you need to follow:
+
+- Save the current report that you’re working on locally. **Ensure that the report format is `.pbip`, not `.pbix`.**
+- Click on `Publish`:
+
+
+
+- Publish it to the destination `Staging`:
+
+
+
+- While the report is being published, you will see the following window:
+
+
+
+- If the report already exists in the destination, click `Replace`:
+
+
+
+- When it’s finished, you will get the following success message:
+
+
+
+- Click on `Open “name_of_your_report” in Power BI` to access the Staging report and ensure that the report is correctly loading:
+
+
+
+- If it’s not loading correctly, it might be because the connection with the DWH is not set up. You’d probably have seen a step like this:
+
+
+
+- If it’s the case, click on `Open dataset settings` or locate your Power BI report within the `Staging Workspace`:
+
+
+
+- Click the 3 dots `…` and `Settings`:
+
+
+
+- Go to `Semantic models` → `Gateway and cloud connections`:
+
+
+
+- Configure the gateway connection to `data-gateway-prd`. You just need to set the `Maps to:` to `dwh-prd`
+
+
+
+- Click on `Apply` and that’s it! If encountering any issue, contact the Data Engineer of your choosing 😀
+- Lastly, commit your changes in your local branch and push them to origin. Create a Pull Request (PR) for other team members to review. Do not forget to include the link to the staging dashboard and a few explanations of the changes / new additions to ease the review process.
+
+**Small tip:** you can also follow a similar procedure to publish it in your own personal workspace. It might be useful if you want to directly share it as a temporary measure to gather user feedback from someone outside of the Data Team. Keep in mind that this will not be the location of the report once it’s fully published, though!
+
+# Creating a new Power BI workspace
+
+Cool, your teammates have reviewed and approved your amazingly cool report. You want to make it available for other users outside of the Data Team to share the impressive insights and visualisations you created. But, oh-no, there’s no Power BI Workspace for your new report… So let’s just create one:
+
+- In the `Power BI Home`, click on `Workspaces` → `+ New workspace`
+
+
+
+- Fill in the information of Name, Description. Do not add any Domain. Add a very cool looking Workspace Image. No need to modify anything under the Advanced tab. Once settled, click on Apply:
+
+
+
+- Check that the new workspace has been created correctly. It should appear empty the first time you create it:
+
+
+
+- Give individual access to your Data Team colleagues as `Admin` in the newly created workspace. You can do so by clicking on `Manage access` → `+ Add people or groups`
+
+
+
+- Once all members have been selected with the correct role, click `Add`:
+
+
+
+- That’s it! Well…
+ - … but probably you’d like to add here your new report. You can reproduce the steps to publish listed [**before**](How%20to%20publish%20a%20Power%20BI%20report%2032dfe47f1d894205a3b19d994045db7f.md), just changing the destination from Staging to the new Workspace.
+ - … and also you’d like to create a new Power BI App with your report, [explained here](How%20to%20publish%20a%20Power%20BI%20report%2032dfe47f1d894205a3b19d994045db7f.md)
+ - … and also definitely you’d like to grant access to users into your Power BI App, [explained here](How%20to%20publish%20a%20Power%20BI%20report%2032dfe47f1d894205a3b19d994045db7f.md)
+
+# Creating or updating a Power BI App
+
+Power BI Applications allow you to gather different reports within the same workspace to be displayed as a standalone report for users. We also have the convention in the Data Team to grant user access to the Power BI Apps, instead of standalone reports or workspaces. Follow these steps to create or update an existing Power BI App:
+
+- In the `Power BI Home`, locate and open the `Workspace` you want to create the application in. Once you’re in the `Workspace` view, click on `Create app` or `Update app`:
+
+
+
+- Fill in or review the configuration within the Setup tab. If you’re creating a new application, don’t forget to add a very-cool-looking logo! Once finished, click on `Next: Add content`:
+
+
+
+- If you’re updating an existing App, the already displayed reports will show up here. In this case, ensure that the new report you want to integrate within the app is properly configured. If you’re creating it for the first time, you will need to click on `Add content`:
+
+
+
+- Select the reports that should appear in the app and click `Add`.
+
+
+
+- Check that the content is now filled with your newly added report. You can click on the little drop up/down arrow to see the tabs of your report. Once it’s looking good, click on `Next: Add audience`:
+
+
+
+- In the Audience tab, ensure that the little eye icon is set as visible for each report that should be available within an App. Here you can also add or edit the audience. By default the workspace users (i.e., the Data Team) will have access to it, but you also probably want to share it with some users outside the Data Team. In this case, you will need to create a new Power BI App group dedicated to your application - you can always do it later if you prefer. Once everything is ready, click on `Publish app`:
+
+
+
+- … and Publish again…
+
+
+
+- … and click on `Go to app` to check that the new Power BI App is working properly.
+
+
+
+**Small note**: when updating an app, you can always skip certain tabs among `Setup`, `Content` and `Audience`, and the button in the bottom right corner will always show `Update app`. In this case, make the relevant changes and, when satisfied, update the application.
+
+# Granting access to users in a Power BI App
+
+To ensure that your marvellous reporting is being used across the company, first users will need to have access to it. Crazy, right?
+
+At this stage, user access is granted at Power BI App level - that is, not at Report or Workspace level.
+
+In order to do so, we need to have a dedicated Azure group for users that should access our app. Mainly you just need to contact Ben Robinson and ask him to create a new group for your new application, or provide him with the name of the existing group. Please, follow the convention **`PowerbiNameOfYourApp`**.
+
+You will also need to provide him with the users that should be added - or deleted - within this group. Don’t forget to ensure that the Data Team users are included in the group as well!
+
+**Little tip**: we strongly advise adding the Data Team as users of the Power BI group, even though it’s technically not necessary because the Data Team already has access to the different workspaces. Why? Because this way we can know which users have access to each group. This can be checked in your Microsoft Account settings under My Groups → Groups I am in, filtering by PowerBI:
+
+
+
+Select your desired group and navigate to `Members` to see who has access to the Power BI App. For example, for `PowerbiCheckInHero` (Check In Hero Power BI App):
+
+
+
+# Add the report details to exposure.yaml in dbt project
+
+The `exposures.yaml` file is used to define **exposures** within the dbt project. Exposures are a way to document and describe how certain dbt models, metrics, or analyses are used in external systems or by specific users.
+
+
+
+Here you can include some small details and description of the report but most importantly the **models which it depends on and the owner**.
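+
+For illustration, an exposure entry might look like the sketch below - the report name, url, referenced model and contact details are made up for the example, so adapt them to your actual report:
+
+```yaml
+exposures:
+  - name: my_new_report                      # hypothetical report name
+    label: My New Report
+    type: dashboard
+    maturity: medium
+    url: https://app.powerbi.com/...         # link to the published Power BI app/report
+    description: >
+      Short description of what the report shows and who uses it.
+    depends_on:
+      - ref('core__some_reporting_model')    # hypothetical model the report reads from
+    owner:
+      name: Data Team
+      email: data-team@example.com           # placeholder contact
+```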
+
+# Add the report to our [Data Products](https://www.notion.so/Data-Products-5030f44a0f764adebb1443ea0681f68a?pvs=21) inside Data Catalogue
+
+Here we have a catalogue of all our current (when updated) reports
+
+
+
+Here anyone interested can access more detailed information about the report, both general and technical.
+
+This is a very good place to explain anything that might not be 100% clear inside the report and that we don’t add to the report itself because of space or aesthetic concerns.
\ No newline at end of file
diff --git a/notion_data_team_no_files/How to publish a Power BI report 32dfe47f1d894205a3b19d994045db7f.md:Zone.Identifier b/notion_data_team_no_files/How to publish a Power BI report 32dfe47f1d894205a3b19d994045db7f.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/How to publish a Power BI report 32dfe47f1d894205a3b19d994045db7f.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/How to set up WSL and Docker Desktop 4771651ae49a455dac98d7071abcd66d.md b/notion_data_team_no_files/How to set up WSL and Docker Desktop 4771651ae49a455dac98d7071abcd66d.md
new file mode 100644
index 0000000..a065097
--- /dev/null
+++ b/notion_data_team_no_files/How to set up WSL and Docker Desktop 4771651ae49a455dac98d7071abcd66d.md
@@ -0,0 +1,20 @@
+# How to set up WSL and Docker Desktop
+
+# WSL
+
+1. Go to windows store
+2. Look for Ubuntu
+3. Pick Ubuntu 24.04 (or whatever version is reasonable at the time you are reading this) and install it
+4. Enable the WSL
+ 1. Control panel
+ 2. Go to Turn Windows Features On or Off
+ 3. Activate Windows Subsystem for Linux
+ 4. Reboot
+5. You might need to install this: [https://learn.microsoft.com/en-us/windows/wsl/install-manual#step-4---download-the-linux-kernel-update-package](https://learn.microsoft.com/en-us/windows/wsl/install-manual#step-4---download-the-linux-kernel-update-package)
+
+# Docker Desktop
+
+1. Download Docker Desktop
+2. Install it
+3. Settings > Resources > WSL Integration → Enable WSL2 integration with Ubuntu
+4. Try to run on your Ubuntu terminal: `docker run hello-world`
\ No newline at end of file
diff --git a/notion_data_team_no_files/How to set up WSL and Docker Desktop 4771651ae49a455dac98d7071abcd66d.md:Zone.Identifier b/notion_data_team_no_files/How to set up WSL and Docker Desktop 4771651ae49a455dac98d7071abcd66d.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/How to set up WSL and Docker Desktop 4771651ae49a455dac98d7071abcd66d.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/How-tos and tips 6fa0131e44854e7aadfab0f837de9276.md b/notion_data_team_no_files/How-tos and tips 6fa0131e44854e7aadfab0f837de9276.md
new file mode 100644
index 0000000..31ce825
--- /dev/null
+++ b/notion_data_team_no_files/How-tos and tips 6fa0131e44854e7aadfab0f837de9276.md
@@ -0,0 +1,35 @@
+# How-tos and tips
+
+[How to set up WSL and Docker Desktop](How%20to%20set%20up%20WSL%20and%20Docker%20Desktop%204771651ae49a455dac98d7071abcd66d.md)
+
+[Set up SSH keys](Set%20up%20SSH%20keys%206b05d5e432164d30b6546bb8bb4ba524.md)
+
+[Little Git SSH cloning trick](Little%20Git%20SSH%20cloning%20trick%203d33758de34742b9ac180fd9c7b5e6b3.md)
+
+[Can’t backup single tables from DWH in DBeaver](Can%E2%80%99t%20backup%20single%20tables%20from%20DWH%20in%20DBeaver%20df6fc66189db415faa9715376832e5ba.md)
+
+[Git conflict! Help!](Git%20conflict!%20Help!%20fbacaa3a3fa7455d9feddcb88299d3d0.md)
+
+[How to publish a Power BI report](How%20to%20publish%20a%20Power%20BI%20report%2032dfe47f1d894205a3b19d994045db7f.md)
+
+[Dealing with massive Docker virtual disks](Dealing%20with%20massive%20Docker%20virtual%20disks%2011a0446ff9c9800cb90df90e04780c48.md)
+
+[PBI: Switch table from Import to DirectQuery](PBI%20Switch%20table%20from%20Import%20to%20DirectQuery%201210446ff9c98027b620f441f083a588.md)
+
+[VPN Set up](VPN%20Set%20up%2001affb09a9f648fbad89b74444f920ca.md)
+
+[DBeaver set up](DBeaver%20set%20up%2012e0446ff9c980de9ac2dc3bb0e9b45d.md)
+
+[Connecting to the DWH](Connecting%20to%20the%20DWH%20b7872e2027d041ffac1363b9c2615971.md)
+
+[Connecting to Core](Connecting%20to%20Core%206ecf68bb25bc489ea8f38ac971e1a2c1.md)
+
+[Add a new device to the Data VPN](Add%20a%20new%20device%20to%20the%20Data%20VPN%201350446ff9c9800abb08ec761bf8ad7f.md)
+
+[Busy man’s guide to optimizing dbt models performance](Busy%20man%E2%80%99s%20guide%20to%20optimizing%20dbt%20models%20performa%20b0540bf8fa0a4ca5a6220b9d8132800d.md)
+
+[Careful with the DB: How to work in SQL Server without giving Pablo a stroke](Careful%20with%20the%20DB%20How%20to%20work%20in%20SQL%20Server%20with%20405c497b76c74bb29dcc790bc59928fd.md)
+
+[How to SQL rollup](How%20to%20SQL%20rollup%201600446ff9c9806c9fdac2ff48e48bd3.md)
+
+[New Dash - Staging](New%20Dash%20-%20Staging%201eb0446ff9c9803dae5bfd03958a76ad.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/How-tos and tips 6fa0131e44854e7aadfab0f837de9276.md:Zone.Identifier b/notion_data_team_no_files/How-tos and tips 6fa0131e44854e7aadfab0f837de9276.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/How-tos and tips 6fa0131e44854e7aadfab0f837de9276.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/HubSpot Data Integration 1120446ff9c980439236e387507aa476.md b/notion_data_team_no_files/HubSpot Data Integration 1120446ff9c980439236e387507aa476.md
new file mode 100644
index 0000000..d6b0e84
--- /dev/null
+++ b/notion_data_team_no_files/HubSpot Data Integration 1120446ff9c980439236e387507aa476.md
@@ -0,0 +1,74 @@
+# HubSpot Data Integration
+
+# Tables integrated:
+
+- deals: All information about user deals **(We need to deal with timestamps to figure out which id represents which stage of the lifecycle)**
+- id:
+- archived: irrelevant
+- contacts:
+- companies:
+- createdAt:
+- updatedAt:
+- line_items: unused for now, but will be used to show what services they originally signed up for
+- properties:
+ - hubspot_owner_id: owner in Superhog responsible for one or multiple deal_ids, the one that brought the account in
+ - dedicated_am: account manager’s name in Superhog
+ - createdate: date that the account was created in HubSpot
+ - contract_signed_date: date in which the contract was signed
+ - live_date: date in which the account went live
+ - cancellation_date: date in which an account that was being used was cancelled
+ - dealname: name of the deal
+ - sales_stage: stage on the lifecycle of the sales team
+ - dealstage: id stage related to the lifecycle of sales
+ - deal_source: general source of where did the deal/opportunity came from
+ - lead_source: Inbound/Outbound
+ - amount_of_properties: amount of properties the owner told us they manage
+ - demo_scheduled: boolean
+ - sales_demo_scheduled: date at which the demo is scheduled for
+ - demo_completed: boolean
+    - meeting_status__hubspot_: detail of what happened during the demo
+ - cancellation_category: categorization as to why they cancelled the account
+ - cancellation_details: free text explaining why they cancelled the account
+ - last_nps_survey_date: date when the response of the form is sent
+ - last_nps_survey_rating: rating if the survey form
+ - last_nps_survey_comment: free text of the survey response
+ - customer_nps_sentiment: 9-10: promoter - 6-8: passive - <6: detractor
+ - notes_last_contacted: date and time of last contact between account manager and the account owner
+ - num_contacted_notes: number of times the account manager has contacted the owner
+ - free_trial_start_date: date at which the free trial for the dashboard began
+ - free_trial_end_date: date at which the free trial for the dashboard expired
+ - date_meeting: date for when the onboarding is booked for
+ - onboarding_stage: stage on where the onboarding is currently at, when completed is set to ‘no value’
+ - onboarding_owner: responsible for onboarding, could be the same account manager but not always
+ - onboarding_call_completed: boolean for completed onboarding
+ - onboarding_owner__cloned_: owner of onboarding that could replace onboarding_owner
+ - meeting_scheduled: boolean for scheduled onboarding
+- contacts: Contact information for user deals
+- form_submissions:
+- forms:
+- tickets **(All host tickets go to host services or account managers and guest tickets go to guest services)**:
+ - id: unique id for each ticket
+ - hs_object_id: should be the same as the one on id
+    - hs_ticket_category: categorization of the ticket set by the account manager based on the content of the ticket; manual process
+ - createdate: creation date
+ - closed_date: date at which the ticket was closed
+ - hs_num_times_contacted: amount of times there was any kind of contact between the account manager and the owner
+ - department_involved: if there are any other or others departments involved into solving the ticket
+    - csat_account_management_sentiment: CSAT rating for account management; happy - neutral - unhappy (all these surveys are sent to the owner once the ticket is closed)
+ - csat_account_management_response: free text for the resolution of the ticket
+    - csat_host_services_sentiment: CSAT rating for host services; happy - neutral - unhappy
+ - csat_host_services_response: free text for the resolution of the ticket
+    - hs_pipeline: shows the department that is currently responsible for handling the ticket
+ - first_agent_reply_date: date of when the first reply was sent from our side
+ - hs_lastactivitydate: date of last occurrence regarding the ticket
+ - hs_lastcontacted: date of last contact with the ticket owner
+ - last_reply_date: date of last reply from the customer
+    - hubspot_owner_id: HubSpot id of the person at Superhog responsible for the ticket
+ - guest_ticket_category: category selected by person dealing with the ticket (Only for guest related tickets)
+ - content: free text describing the ticket from the customer
+ - source_type: source of where the ticket comes from, email, phone call, etc…
+- engagements:
+ - id_engagements:
+ -
+
+Include engagement tables so we can have more detail on the contact/relationship between account managers and owners
\ No newline at end of file
diff --git a/notion_data_team_no_files/HubSpot Data Integration 1120446ff9c980439236e387507aa476.md:Zone.Identifier b/notion_data_team_no_files/HubSpot Data Integration 1120446ff9c980439236e387507aa476.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/HubSpot Data Integration 1120446ff9c980439236e387507aa476.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Incident Management 4829884213d744d4884be6c53988e696.md b/notion_data_team_no_files/Incident Management 4829884213d744d4884be6c53988e696.md
new file mode 100644
index 0000000..0a666e3
--- /dev/null
+++ b/notion_data_team_no_files/Incident Management 4829884213d744d4884be6c53988e696.md
@@ -0,0 +1,7 @@
+# Incident Management
+
+[Incident template](Incident%20template%201340446ff9c980508e68d52659bb1a9b.md)
+
+[Incident Reports](Incident%20Reports%209cdecb44c3914d24a0075ca1e8958fbf.md)
+
+This might be of inspiration, although you probably want to read it on calm days, not when shit hits the fan: https://github.com/dastergon/postmortem-templates.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Incident Management 4829884213d744d4884be6c53988e696.md:Zone.Identifier b/notion_data_team_no_files/Incident Management 4829884213d744d4884be6c53988e696.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Incident Management 4829884213d744d4884be6c53988e696.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Incident Reports 9cdecb44c3914d24a0075ca1e8958fbf.md b/notion_data_team_no_files/Incident Reports 9cdecb44c3914d24a0075ca1e8958fbf.md
new file mode 100644
index 0000000..86b976b
--- /dev/null
+++ b/notion_data_team_no_files/Incident Reports 9cdecb44c3914d24a0075ca1e8958fbf.md
@@ -0,0 +1,33 @@
+# Incident Reports
+
+Please, make sure to order this list so that recent incidents sit on top, and older ones go to the bottom.
+
+[20250605-01 - Overrepresentation of Host Resolutions Payments](20250605-01%20-%20Overrepresentation%20of%20Host%20Resolutio%202090446ff9c9804ca74be8bfae70fa64.md)
+
+[20250409-01 - Wrong computation on Revenue Retained metrics](20250409-01%20-%20Wrong%20computation%20on%20Revenue%20Retaine%201d10446ff9c980e0b6d3e52b40879b68.md)
+
+[20250304-01 - Verification Bulk Update](20250304-01%20-%20Verification%20Bulk%20Update%201ad0446ff9c9806faa8bf7673e7ed6a5.md)
+
+[20250124-01 - Booking invoicing incident](20250124-01%20-%20Booking%20invoicing%20incident%201880446ff9c9803fb830f8de24d97ebb.md)
+
+[20250122-01 - Power BI Main Guest KPIs Bug](20250122-01%20-%20Power%20BI%20Main%20Guest%20KPIs%20Bug%201840446ff9c980249355f34c58c4686e.md)
+
+[20241211-01 - DWH scheduled execution has not been launched](20241211-01%20-%20DWH%20scheduled%20execution%20has%20not%20been%201590446ff9c9806086e0ec77336d4c51.md)
+
+[20241119-01 - CheckIn Cover multi-price problem (again)](20241119-01%20-%20CheckIn%20Cover%20multi-price%20problem%20(a%201430446ff9c98088b547dfb0baff6024.md)
+
+[20241104-01 - Booking invoicing incident due to bulk UpdatedDate change](20241104-01%20-%20Booking%20invoicing%20incident%20due%20to%20bu%2082f0fde01b83440e8b2d2bd6839d7c77.md)
+
+[20240919-01 - dbt test failure because wrong configuration in schema file](20240919-01%20-%20dbt%20test%20failure%20because%20wrong%20confi%201060446ff9c98081896ad46ad0b153e7.md)
+
+[20240913-01 - `dbt run` blocked by “not in the graph” error](20240913-01%20-%20dbt%20run%20blocked%20by%20%E2%80%9Cnot%20in%20the%20graph%201030446ff9c980c291f1d57751f443ee.md)
+
+[20240902-01 - Missing payment details in intermediate](20240902-01%20-%20Missing%20payment%20details%20in%20intermedi%20f2067416c0824fc686513937b3fbca78.md)
+
+[20240821-01 - SQL Server connection outage](20240821-01%20-%20SQL%20Server%20connection%20outage%20ba5caf5ba10e438a8393f63838367ad9.md)
+
+[20240718-01 - Xe.com data not retrieved](20240718-01%20-%20Xe%20com%20data%20not%20retrieved%205c283e9aa4834323b38af0bff95477a5.md)
+
+[20240621-01 - Failure of Core full-refresh Airbyte jobs](20240621-01%20-%20Failure%20of%20Core%20full-refresh%20Airbyte%204b308fa051694afe89c8f7147ce5ed27.md)
+
+[20240619-01 - CheckIn Cover multi-price problem](20240619-01%20-%20CheckIn%20Cover%20multi-price%20problem%20fabd174c34324292963ea52bb921203f.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Incident template 1340446ff9c980508e68d52659bb1a9b.md b/notion_data_team_no_files/Incident template 1340446ff9c980508e68d52659bb1a9b.md
new file mode 100644
index 0000000..6267278
--- /dev/null
+++ b/notion_data_team_no_files/Incident template 1340446ff9c980508e68d52659bb1a9b.md
@@ -0,0 +1,48 @@
+# Incident template
+
+> This is a recommended template to document incidents.
+You might not always need all of it, and you might sometimes want to add new sections. Use your own judgement.
+We recommend tagging incidents as *YYYYMMDD-NN*. So, if two incidents happen on 2024-06-09, you would tag them `20240609-01` and `20240609-02`.
+Inspired by: https://github.com/dastergon/postmortem-templates
+>
+
+# Title of the incident
+
+Managed by: *Author here*
+
+## Summary
+
+- Components involved: *What parts of our system were affected or played a significant role*
+- Started at: *When did the issue actually start*
+- Detected at: *When did we notice that the incident existed*
+- Mitigated at: *When did we bring things to a stable state without further impact*
+
+*A brief summary of what happened.*
+
+## Impact
+
+*What were the negative consequences of the incident*
+
+## Timeline
+
+*Events as they happened over time. Make sure to write down the time zone you are using.*
+
+## Root Cause(s)
+
+*An explanation of the root causes that started the incident and how they unfolded into the full-fledged incident*
+
+## Resolution and recovery
+
+*What was done to fix the incident and go back to normal*
+
+## **Lessons Learned**
+
+*List of knowledge acquired. Typically structured as: What went well, what went badly, where did we get lucky*
+
+## Action Items
+
+*What should be done after the incident to prevent future occurrences of the same issue*
+
+## Appendix
+
+*Miscellanea corner for anything else you might want to include*
\ No newline at end of file
diff --git a/notion_data_team_no_files/Invoice Screen & Protect 1610446ff9c980f88de6d6293b4fab03.md b/notion_data_team_no_files/Invoice Screen & Protect 1610446ff9c980f88de6d6293b4fab03.md
new file mode 100644
index 0000000..27367b2
--- /dev/null
+++ b/notion_data_team_no_files/Invoice Screen & Protect 1610446ff9c980f88de6d6293b4fab03.md
@@ -0,0 +1,101 @@
+# Invoice Screen & Protect
+
+## Protection Types
+
+There are 4 types of protections:
+
+1. **Basic Protection**
+2. **Damage Waiver**
+3. **Screen and Protect (S&P)**
+4. **Standalone Protection**
+
+### Invoicing Rules
+
+- Partners are invoiced each month for each **approved** or **flagged** verification with a checkout date in that invoicing month.
+- **Damage Waiver** is an exception: it is charged based on the month of creation rather than the checkout date.
+
+### Cancellation and Rejection Fees
+
+- All **cancelled** or **rejected** verifications are charged a cancellation or rejection fee of **0.25 (local currency)**.
+- If a verification is **both cancelled and rejected**, only one fee is charged (whichever occurred first):
+ - Cancellation fee is charged on the **month of cancellation**.
+ - Rejection fee is charged on the **month of creation**.
+
+### Flagged Verifications
+
+- Verifications flagged by Wilbur (`is_flag_protected = false`) will have a `Flagged` status.
+- Flagged verifications are charged the rejection fee of **0.25** on the **month of creation**.
+- Flagged verifications are not protected.
+
+### Special Rules for Damage Waiver
+
+- Damage Waiver is **not refundable**, meaning the fee is charged even if the booking is cancelled. If it is rejected, it means it was never charged, so only the rejection fee applies.
+- Damage Waiver is not impacted by `is_flag_protected`. Verifications with `is_flag_protected = false` are still charged.
+
+## Fee Calculation
+
+To calculate the nightly fee for each booking (see the SQL sketch after this list):
+
+1. **Join Protection Types with Price Tables**:
+ - Match using `id_currency` and `protection_amount`.
+2. **Long-Stay vs. Short-Stay Fees**:
+ - For bookings with `number_of_nights > 30`, use the `long_stay_fee` from the prices table.
+ - For shorter stays, use the `short_stay_fee`.
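+
+As a rough illustration, the lookup above could look like the following SQL sketch (table and column names here are hypothetical, not the actual DWH models):
+
+```sql
+-- Hypothetical sketch: nightly fee lookup per verification.
+-- "verifications" and "protection_prices" are illustrative names.
+select
+    v.id_verification,
+    case
+        when v.number_of_nights > 30 then p.long_stay_fee
+        else p.short_stay_fee
+    end as nightly_fee
+from verifications as v
+left join protection_prices as p
+    on v.id_currency = p.id_currency
+    and v.protection_amount = p.protection_amount
+```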
+
+## Discounts
+
+Sales representatives can apply discounts to partners. Discounts are valid only during their active periods (based on start and end dates) and are categorized as follows:
+
+1. **Generic Discount**:
+ - Applies to the whole account, excluding Damage Waiver.
+ - Configured with a start and end date.
+2. **Volume Discount**:
+ - Applies based on the volume of **approved** or **flagged** bookings with a checkout date in that month. It doesn’t apply to Damage Waiver.
+ - Verifications flagged by Wilbur (`is_flag_protected = false`) will have a `Flagged` status but these don’t count towards the Volume Discount.
+3. **Total Fee Calculation:**
+ - The price increase is applied first, and then the discount. For example:
+
+ ```sql
+ booking_fee = 100
+ general_discount = 10
+ price_increase = 5
+ charged_fee = (100 * 1.05) * 0.90 = 94.5
+ ```
+
+
+**Important Notes:**
+
+- Damage Waiver is always excluded from discounts.
+- **Only one discount can be applied to an account at a time. Discounts cannot coexist.**
+
+## Price Increase
+
+A **Price Increase** mechanism exists but is not currently in use. It may be implemented in the future.
+
+## Example
+
+| **user** | **is_protected** | **monthly_volume_discount** | **threshold_approved_booking_volume** | **monthly_volume_discount_start_date_utc** | **monthly_volume_discount_end_date_utc** | id_currency |
+| --- | --- | --- | --- | --- | --- | --- |
+| joaquin_travel | TRUE | 10 | 5 | 01/11/2024 | 30/06/2028 | 1 |
+
+| id_verification | protection_type | protection_starting_level | protection_basic_amount | protection_extended_amount | verification_status | checkin_date_utc | checkout_date_utc | number_of_nights | is_cancelled | cancelled_at_utc | cancelled_date_utc | creation_at_utc | creation_date_utc | cosmos_created_date_utc | nightly_fee | booking_fee | cancellation_fee | rejected_fee | discount_percentage | fee_amount | discount_amount | final_fee | invoice_month |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | DAMAGE WAIVER | [NULL] | 5,000 | [NULL] | FLAGGED | 01/01/2025 | 15/01/2025 | 14 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ - | £ 30.00 | £ - | £ - | 10 | £ 30.00 | £ 3.00 | £ 27.00 | 1 |
+| 2 | BASIC PROTECTION | [NULL] | 500 | [NULL] | APPROVED | 01/02/2025 | 05/04/2025 | 63 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 1.75 | £ - | £ - | £ - | 0 | £ 110.25 | £ - | £ 110.25 | 4 |
+| 3 | DAMAGE WAIVER | [NULL] | 5,000 | [NULL] | APPROVED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ - | £ 30.00 | £ - | £ - | 10 | £ 30.00 | £ 3.00 | £ 27.00 | 1 |
+| 4 | BASIC PROTECTION | [NULL] | 500 | [NULL] | APPROVED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 3.75 | £ - | £ - | £ - | 10 | £ 15.00 | £ 1.50 | £ 13.50 | 1 |
+| 5 | DAMAGE WAIVER | [NULL] | 10,000 | [NULL] | APPROVED | 01/01/2025 | 15/03/2025 | 73 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ - | £ 40.00 | £ - | £ - | 0 | £ 40.00 | £ - | £ 40.00 | 3 |
+| 6 | STANDALONE PROTECTION | 500 | [NULL] | 5,000,000 | APPROVED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 13.75 | £ - | £ - | £ - | 10 | £ 55.00 | £ 5.50 | £ 49.50 | 1 |
+| 7 | SCREEN & PROTECT | [NULL] | 500 | 100,000 | APPROVED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 7.00 | £ - | £ - | £ - | 10 | £ 28.00 | £ 2.80 | £ 25.20 | 1 |
+| 8 | SCREEN & PROTECT | [NULL] | 250 | 5,000,000 | REJECTED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ - | £ - | £ - | £ 0.25 | 0 | £ 0.25 | £ - | £ 0.25 | 12 |
+| 9 | SCREEN & PROTECT | [NULL] | 500 | 100,000 | REJECTED | 01/03/2025 | 05/03/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ - | £ - | £ - | £ 0.25 | 0 | £ 0.25 | £ - | £ 0.25 | 12 |
+| 10 | SCREEN & PROTECT | [NULL] | 250 | 1,000,000 | APPROVED | 01/01/2025 | 15/03/2025 | 73 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 5.25 | £ - | £ - | £ - | 10 | £ 383.25 | £ 38.33 | £ 344.93 | 3 |
+| 11 | STANDALONE PROTECTION | 250 | [NULL] | 5,000,000 | FLAGGED | 01/01/2025 | 10/01/2025 | 9 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 14.00 | £ - | £ - | £ - | 10 | £ 126.00 | £ 12.60 | £ 113.40 | 1 |
+| 12 | BASIC PROTECTION | [NULL] | 500 | [NULL] | FLAGGED | 01/01/2025 | 11/01/2025 | 10 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 3.75 | £ - | £ - | £ - | 10 | £ 37.50 | £ 3.75 | £ 33.75 | 1 |
+| 13 | SCREEN & PROTECT | [NULL] | 250 | 50,000 | FLAGGED | 01/02/2025 | 16/02/2025 | 15 | TRUE | 17/12/2024 | 17/12/2024 | 17/12/2024 | 01/01/2024 | 03/12/2024 | £ - | £ - | £ 0.25 | £ - | 0 | £ 0.25 | £ - | £ 0.25 | 1 |
+| 14 | STANDALONE PROTECTION | 250 | [NULL] | 5,000,000 | APPROVED | 01/01/2025 | 05/10/2025 | 277 | TRUE | 05/04/2025 | 05/04/2025 | 05/01/2025 | 01/01/2024 | 17/12/2024 | £ - | £ - | £ 0.25 | £ - | 0 | £ 0.25 | £ - | £ 0.25 | 4 |
+| 15 | SCREEN & PROTECT | [NULL] | 500 | 1,000,000 | APPROVED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 11.00 | £ - | £ - | £ - | 10 | £ 44.00 | £ 4.40 | £ 39.60 | 1 |
+| 16 | BASIC PROTECTION | [NULL] | 500 | [NULL] | APPROVED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 3.75 | £ - | £ - | £ - | 10 | £ 15.00 | £ 1.50 | £ 13.50 | 1 |
+| 17 | SCREEN & PROTECT | [NULL] | 5,000 | 50,000 | FLAGGED | 01/01/2025 | 09/01/2025 | 8 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 6.00 | £ - | £ - | £ - | 10 | £ 48.00 | £ 4.80 | £ 43.20 | 1 |
+| 18 | SCREEN & PROTECT | [NULL] | 5,000 | 50,000 | APPROVED | 01/01/2025 | 25/01/2025 | 24 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ 6.00 | £ - | £ - | £ - | 10 | £ 144.00 | £ 14.40 | £ 129.60 | 1 |
+| 19 | DAMAGE WAIVER | [NULL] | 250 | [NULL] | FLAGGED | 01/01/2025 | 05/01/2025 | 4 | FALSE | [NULL] | [NULL] | 01/01/2024 | 01/01/2024 | 17/12/2024 | £ - | £ 15.00 | £ - | £ - | 0 | £ 15.00 | £ - | £ 15.00 | 1 |
\ No newline at end of file
diff --git a/notion_data_team_no_files/KPIs Refactor - Let’s go daily - 2024-10-23 1280446ff9c980dc87a3dc7453e95f06.md b/notion_data_team_no_files/KPIs Refactor - Let’s go daily - 2024-10-23 1280446ff9c980dc87a3dc7453e95f06.md
new file mode 100644
index 0000000..9a70b70
--- /dev/null
+++ b/notion_data_team_no_files/KPIs Refactor - Let’s go daily - 2024-10-23 1280446ff9c980dc87a3dc7453e95f06.md
@@ -0,0 +1,283 @@
+# KPIs Refactor - Let’s go daily - 2024-10-23
+
+# Context & initial thoughts
+
+Uri here. After the discussions with Product teams regarding the needs for Product KPIs, and anticipating future needs, I’m starting this Notion Page to gather thoughts and feedbacks on a potential refactor for KPIs.
+
+Currently, KPIs follow 2 different trends: either these are in a MTD computation for global and a couple of dimensions, or we have them on a monthly basis per Deal. Life was good and easy - despite having 2 flows - until we started computing Churn. Churn Rates (Revenue, Bookings, Listings) are Deal dependent and thus need to go into the monthly compute. However, we needed to aggregate them at MTD level, and this creates some dependencies between the 2 flows that - to me - show that we can do better. Especially when considering that the MTD computation is not historicised - we keep each day of the current month with the aggregated data of that month, but next month we lose it except for the last day. So add this up with many more specific requirements from Product KPIs and boom! Refactor time <3
+
+SO! We have mostly 2 options:
+
+- We keep Main KPIs as is, and implement each product flow independently.
+ - Advantages: looks easier at first!
+ - Disadvantages: We’ll have some repeated logic ongoing (ex: if I want to compute Created Bookings) and likely some of the Product KPIs needs will need to somehow be reflected in the Main KPIs anyway (ex: Revenue per Service of New Pricing)
+- We put every KPI in a master flow, and from there we build any reporting that needs a specific KPI
+ - Advantages: everything is centralised, so creating/updating a new metric/dimension would likely be easier than changing it N times. Also, much more scalable.
+ - Disadvantages: Complexity will increase, and likely, it requires quite a bit of work. Especially in ensuring that whatever is currently deployed and used by our users keeps working well.
+ - I would challenge the complexity increase. That happens in both cases. I’ll admit this path probably requires more thorough planning and will consume more brain calories, but I don’t think the end result is more complex than the alternative.
+ - Regarding ensuring that things keep working well, I have some ideas on how this can be monitored *very* easily. Won’t explain here, let’s discuss in the right places, but I would encourage you to assume you will have a traffic light that will let you know if you’ve broken something refactoring.
+
+> *Being realistic though, likely there’s going to be some standalone computations coexisting with KPIs computations in different reports, so it won’t be full one-sided. Example: Top Losers (Account Managers report). Mostly uses data from existing metrics, but is enriched with Hubspot data specifically needed for RevOps.*
+>
+
+# Proposal: let’s create a semantic model
+
+> *Maybe Semantic Model is not the right name for this 🙂. Happy to be challenged.*
+>
+
+I agree that Semantic Model will lead to confusion due to how it’s used by industry/market tools. Perhaps just coming up with some silly name is good enough. `URI` models? 😏
+
+KPI models I think might be better 🤤
+
+I’d like to go for the second option: put every KPI in a master flow. And I’d do it by creating a semantic model. I played a bit with MetricFlow in the past and despite the tool not being super mature, I got some nice ideas that we could implement without going for a dedicated tool. Just by using some standards.
+
+Ideally, I’d build it on the intermediate layer, in a standalone folder called semantic_model or sem or something like this. The main reason behind it is that we’ll likely integrate information from many sources anyway (core, hubspot, etc) and I’d like to specifically differentiate the model `int_core__bookings` from `int_sem__bookings` - if it ever exists -, since the second will likely depend on the first. Another reason could be providing users outside the Data Team specific access to this eventually more “mature” data model by labelling the models.
+
+## Pre-aggregating data within deepest granularity
+
+Let’s talk about a simple metric, such as Created Bookings. It just computes the count of bookings that have been created in a certain time period - easy-peasy.
+
+The current model of `int_core__mtd_created_bookings_metric` does some crazy computation because 1) it directly applies the MTD logic within the model and 2) it directly computes the aggregation per dimension, which usually requires additional joins with other tables. This has already shown some limitations (when adding more dimensions we needed to split the `booking_metrics` model into 4; deal KPIs are mostly monthly because we couldn’t sustain the complexity of the MTD and Deal granularity).
+
+- A small challenge from my side here. The split of `booking_metrics` into four different models looked positive at the start, but weeks later we found out the performance problems were truly originating in the `VACUUM ANALYZE` issues. In any case, I agree with the overall point that some mtd models are overly complex due to having both the metric and the mtd logic there.
+
+In order to overcome this, I’d create a pre-aggregated model that has the deepest granularity needed which, by definition, is not at `id_booking` level. Continuing with this example of Created Bookings, at the moment, we would just need the following granularity:
+
+- dimensions
+ - `date` (created_date_utc)
+ - `id_deal`
+- metrics
+ - `created_bookings` (count id_booking) → effectively it will be daily booking count for each dimension value
+
+Why? because the current status of KPIs just needs:
+
+- Created Bookings per Deal and Month → We already have Deal here so perfect, and month can be obtained by doing `date_trunc('month', date)::date` and `sum(created_bookings)` (see the sketch after this list)
+- MTD Global Created Bookings → We don’t care about the fact that id_deal exists here, so no need to take it into account for the aggregation. We could just do the sum of created bookings per day, and later on, apply a MTD computation
+- MTD By Billing Country Created Bookings → Billing Country is Deal dependent with the assumption we made. Thus, either we already provide the Billing Country in each model OR we join it later with a much smaller Deal table
+- MTD By # of Listings Created Bookings → Similarly to before, the Listings segmentation depends on Deal, but also on Date.
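+
+For illustration, the per-Deal monthly aggregation mentioned in the first bullet would reduce to something like this (a sketch, assuming the daily pre-aggregated model exists under a hypothetical name like `int_sem__created_bookings`):
+
+```sql
+-- Sketch: monthly Created Bookings per Deal from the daily pre-aggregated model.
+select
+    date_trunc('month', date)::date as month,
+    id_deal,
+    sum(created_bookings) as created_bookings
+from {{ ref("int_sem__created_bookings") }}
+group by 1, 2
+```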
+
+Ideally, I’d opt for having already all necessary joins handled in the deepest granularity, meaning the model for `int_sem__created_bookings` would already contain the needed dimensions within it to ease up further logic. It would look like:
+
+- dimensions:
+ - `date` (created_date_utc)
+ - `id_deal`
+ - `billing_country`
+ - `listing_segmentation`
+- metrics
+ - `created_bookings` (count id_booking)
+
+Why? Mainly because so far the dimensions we have are somehow Deal-dependent. But if we go for a Listing Country dimension, for instance, it would create an additional join that could be handled here. Need to flag if a booking is coming from New Dash or Old Dash? Cool, let’s flag it here. Is this booking coming from a PMS, and if so, which one? Cool, flag it here.
+
+## Scalability is key
+
+Having this proposed deepest granularity setup - assuming it exists - would make things much easier for upper layers, cool. But it’s possible we end up with TONS of dimensions that will make full-refreshes very costly. To continue with the example:
+
+- The proposed setup for a table containing `date`, `id_deal`, `created_bookings` would have at this moment 150k rows.
+- The current state of `int_core__mtd_created_bookings_metric` shows 3.8k rows
+- The current state of `int_core__monthly_booking_history_by_deal` has 29k rows
+- I know I’m annoying, but let me just say it once again: let’s just build it and optimize if and when it’s needed. The volumes described here are perfectly manageable. Incrementality can help a lot. [Microbatching](https://docs.getdbt.com/docs/build/incremental-microbatch) even more.
+
+So yeah. Let’s not do full-refreshes, at least not every day. We could materialise these models as incremental so each day we just update the new data based on the `updated_at_utc`. This way we can ensure that at least we don’t have massive executions every day - but we need to have these full-refreshes scheduled from time to time to avoid invisible data drift. Here’s where an orchestration engine will be super useful.
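+
+A minimal sketch of what the incremental setup could look like in dbt (simplified: it assumes `id_deal` is directly available on the bookings model, and it uses a fixed lookback window; the exact strategy and filter are design choices still to be made):
+
+```sql
+-- Hypothetical sketch of an incremental daily KPI model (merge strategy).
+{{
+    config(
+        materialized="incremental",
+        incremental_strategy="merge",
+        unique_key=["date", "id_deal"],
+    )
+}}
+
+select
+    icb.created_date_utc as date,
+    coalesce(icb.id_deal, 'UNSET') as id_deal,
+    count(distinct icb.id_booking) as created_bookings
+from {{ ref("int_core__bookings") }} as icb
+{% if is_incremental() %}
+-- Only recompute the days that can still change, instead of the whole history.
+where icb.created_date_utc >= current_date - interval '7 days'
+{% endif %}
+group by 1, 2
+```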
+
+We should also consider that adding or modifying an existing dimension or metric would necessarily need a full-refresh, though. But just for that semantic model and its upper dependencies.
+
+Afterwards, the rest of upper logic could be full-refreshed every day as we do now. Since data would be at the deepest granularity, at least in the current state, it wouldn’t be necessary to do crazy joins (maybe just those needed within semantic models to create weighted and converted metrics).
+
+I think it could be useful to have metadata on when a specific dimension/metric pair has been 1) created and 2) last updated. This has nothing to do with the creation or update of the booking, but is more to know, internally, if data is updating correctly, is sufficiently fresh, etc. We could even add some tests directly here for semantic models.
+
+## Some open thoughts
+
+- **Not all dimensions will make sense for all metrics**. For example, we likely don’t care about knowing the Invoiced Athena Revenue per Listing Segmentation since (at the moment) Listing segmentation is based on Platform users. This effectively means that on upper layers, there might be the need to handle different dimension definitions to apply within certain ranges. Not a big deal I think but just to keep in mind
+- **We might not need all dimensions**. For example, we might want to exclude cancelled bookings from created bookings. Maybe adding a Booking State dimension is overkill when we could just create a dedicated metric of Created Bookings without Cancellations or similar.
+- **On deepest granularity**. Whatever we chose as the deepest granularity is something that we need to be very cautious about. I’m specifically thinking of going into a maximum granularity by default of Date and Deal. This means, no Hour granularity and also means no Platform User (User Host) granularity. I’d like to be challenged here if needed.
+- **Nulls might be a problem when aggregating**. Not all users have a deal, for instance, and we know that global created bookings will effectively have more bookings than the sum of created bookings for each deal. I wonder here if we should explicitly cast nulls as UNSET or similar, especially when joining, to avoid errors and ensure data completeness independently of the granularity chosen. However, consider that UNSET might affect row counts and share computations, such as churn rates.
+- Some of these details fly over my head because I’m not deep enough into the mental model of the current setup to see this intuitively. I’m happy for you to do whatever feels best if you feel you can manage. I’m also happy to set time aside and jump into the dirt if you don’t feel in control and truly need another pair of eyes going low level on this.
+
+# Refined proposal
+
+1. We create a new folder at intermediate logic to handle the different extractions and aggregation logic. We name this folder `kpis`.
+2. We have a standard nomenclature in this folder that all models start with `int_kpis`
+3. We have a first layer models, that handles all the necessary joins and pre-aggregate the information at the deepest granularity. Each extraction model has the minimal granularity needed - at this stage, temporality wise, we go for daily. The convention would look like `int_kpis__daily_name_of_the_kpi`.
+ - For instance, we would have a `int_kpis__daily_created_bookings`
+
+ ```sql
+ {{ config(materialized="table", unique_key=["date", "id_deal", "dash_source"]) }}
+ select
+ -- Unique Key --
+ icb.created_date_utc as date,
+ coalesce(icuh.id_deal, 'UNSET') as id_deal,
+ case
+ when icbtpb.id_booking is not null then 'New Dash' else 'Old Dash'
+ end as dash_source,
+ -- Dimensions --
+ coalesce(
+ icd.main_billing_country_iso_3_per_deal, 'UNSET'
+ ) as main_billing_country_iso_3_per_deal,
+ coalesce(
+ icmas.active_accommodations_per_deal_segmentation, 'UNSET'
+ ) as active_accommodations_per_deal_segmentation,
+ -- Metrics --
+ count(distinct icb.id_booking) as created_bookings
+ from {{ ref("int_core__bookings") }} as icb
+ left join
+ {{ ref("int_core__user_host") }} as icuh on icb.id_user_host = icuh.id_user_host
+ left join {{ ref("int_core__deal") }} as icd on icuh.id_deal = icd.id_deal
+ left join
+ {{ ref("int_kpis__daily_accommodation_segmentation") }} as icmas
+ on icuh.id_deal = icmas.id_deal
+ and icb.created_date_utc = icmas.date
+ left join
+ {{ ref("int_core__booking_to_product_bundle") }} as icbtpb
+ on icb.id_booking = icbtpb.id_booking
+ group by 1, 2, 3, 4, 5
+ ```
+
+ - Currently materialised as a table, but easy to adapt to incremental merge.
+ - Regarding UNSET: we ensure that there’s no nulls in the dimensions. This is beneficial because
+ - 1) in future aggregations (monthly, per dimension, etc.) it will allow us to ensure data is complete for additive metrics - meaning the sum of a metric over all possible values of a dimension will always match the global figure. This can be - and should be - a data test within dbt (a sketch of such a test appears after this numbered list).
+ - 2) allows easier for joins with other metrics to compute weighted metrics, such as Total Revenue per Created Bookings. It means we could quantify the Total Revenue per Created Bookings of these users that do not have a Deal Id set, for instance
+ - 3) allows to quantify the % of data incompleteness we have in a given dimension: for instance, if the Dimension Billing Country has 10 UNSET created bookings over a total of 100, we know that 10% of the bookings cannot be attributed to a specific Billing Country.
+4. Time aggregations need to depend on these first layer models. For instance, we can have MTD, YTD, Monthly, etc. If it just handles the time aggregation and nothing more, these models would be called `int_kpis__time_aggregation_name_of_the_kpi`.
+ - For instance, we would have `int_kpis__mtd_created_bookings`
+
+ ```sql
+ {{
+ config(
+ materialized="view",
+ unique_key=[
+ "date",
+ "id_deal",
+ "dash_source",
+ "active_accommodations_per_deal_segmentation",
+ ],
+ )
+ }}
+
+ select
+ -- Unique Key --
+ d.date,
+ b.id_deal,
+ b.dash_source,
+ b.active_accommodations_per_deal_segmentation,
+ -- Dimensions --
+ b.main_billing_country_iso_3_per_deal,
+ -- Metrics --
+ sum(b.created_bookings) as created_bookings
+ from {{ ref("int_dates_mtd") }} d
+ left join
+ {{ ref("int_kpis__daily_created_bookings") }} b
+ on date_trunc('month', b.date)::date = d.first_day_month
+ and extract(day from b.date) <= d.day
+ where id_deal is not null
+ group by 1, 2, 3, 4, 5
+ ```
+
+ - Bear in mind that the unique key can change in these kinds of aggregations. In this case, we keep the previous 3 dimensions (date, id_deal, dash_source) BUT we are forced to include active_accommodations_per_deal_segmentation. The reason is that one Deal will only have one Listing Segment value on a given Date, but can have more than one over a month. In essence, any dimension that can change over the month needs to appear in the unique key.
+ - **OPEN QUESTION: do we want to store all MTD dates, instead of just last day of the month + any day of the current month?**
+5. We can also aggregate per dimensions. These will be configured in a macro so each model can have shared dimensions (Global, By number of listings, by billing country) that likely will go to Main KPIs or specific dimensions (By Dash Type - new dash, old dash) that likely will be used in specific domains. These models should be called `int_kpis__dim_agg_time_aggregation_name_of_the_kpi`.
+ - For instance, we would have `int_kpis__dim_agg_mtd_created_bookings`
+
+ ```sql
+ {% set dimensions = get_kpi_dimensions_per_model("BOOKINGS") %}
+
+ {{ config(materialized="table", unique_key=["date", "dimension", "dimension_value"]) }}
+
+ {% for dimension in dimensions %}
+ select
+ -- Unique Key --
+ date,
+ {{ dimension.dimension }} as dimension,
+ {{ dimension.dimension_value }} as dimension_value,
+ -- Metrics --
+ sum(created_bookings) as created_bookings
+ from {{ ref("int_kpis__mtd_created_bookings") }}
+ group by 1, 2, 3
+ {% if not loop.last %}
+ union all
+ {% endif %}
+ {% endfor %}
+ ```
+
+ - It can be seen that we retrieve a specific set of dimensions that in this case are dedicated to Bookings. This is configured in the macro as follows:
+
+ ```sql
+ /*
+ The following lines specify for each dimension the field to be used in a
+ standalone macro.
+ Please note that strings should be encoded with " ' your_value_here ' ",
+ while fields from tables should be specified like " your_field_here "
+ */
+ {% macro dim_global() %}
+ {{ return({"dimension": "'global'", "dimension_value": "'global'"}) }}
+ {% endmacro %}
+ {% macro dim_billing_country() %}
+ {{
+ return(
+ {
+ "dimension": "'by_billing_country'",
+ "dimension_value": "main_billing_country_iso_3_per_deal",
+ }
+ )
+ }}
+ {% endmacro %}
+ {% macro dim_number_of_listings() %}
+ {{
+ return(
+ {
+ "dimension": "'by_number_of_listings'",
+ "dimension_value": "active_accommodations_per_deal_segmentation",
+ }
+ )
+ }}
+ {% endmacro %}
+ {% macro dim_deal() %}
+ {{ return({"dimension": "'by_deal'", "dimension_value": "id_deal"}) }}
+ {% endmacro %}
+ {% macro dim_dash() %}
+ {{ return({"dimension": "'by_dash_source'", "dimension_value": "dash_source"}) }}
+ {% endmacro %}
+
+ /*
+ Macro: get_kpi_dimensions_per_model
+
+ Provides a general assignment of the Dimensions available for each KPI
+ model. Keep in mind that these assignments need to be declared
+ beforehand.
+
+ */
+ {% macro get_kpi_dimensions_per_model(entity_name) %}
+
+ {# Base dimensions shared by all models #}
+ {% set base_dimensions = [
+ dim_global(),
+ dim_number_of_listings(),
+ dim_billing_country(),
+ dim_deal(),
+ ] %}
+
+ {# Initialize a list to hold any model-specific dimensions #}
+ {% set additional_dimensions = [] %}
+
+ {# Add entity-specific dimensions #}
+ {% if entity_name == "BOOKINGS" %}
+ {% set additional_dimensions = [dim_dash()] %}
+ {% endif %}
+
+ {# Combine base dimensions with additional dimensions for the specific model #}
+ {% set dimensions = base_dimensions + additional_dimensions %}
+ {{ return(dimensions) }}
+ {% endmacro %}
+
+ ```
+
+6. Be aware that this setup needs daily segmentations to work properly, which might be dependent on some lifecycle models. For instance, Deal Metrics and Listings Metrics depend on the respective lifecycle models. Thus, lifecycle models need to be computed on a daily basis.
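+
+For the additivity data test mentioned in point 3, a hypothetical dbt singular test could look roughly like this (a sketch; it returns the date/dimension pairs whose sum does not match the global figure, so any returned row makes the test fail):
+
+```sql
+-- Hypothetical singular test: every dimension must sum up to the 'global' figure.
+with per_dimension as (
+    select date, dimension, sum(created_bookings) as created_bookings
+    from {{ ref("int_kpis__dim_agg_mtd_created_bookings") }}
+    group by 1, 2
+)
+select
+    p.date,
+    p.dimension,
+    p.created_bookings,
+    g.created_bookings as global_created_bookings
+from per_dimension as p
+inner join per_dimension as g
+    on p.date = g.date
+    and g.dimension = 'global'
+where p.dimension != 'global'
+    and p.created_bookings != g.created_bookings
+```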
+
+That’s it. For the rest, we will adapt based on the needs as these arise.
+
+**Note**: MTD should have a from and to date to clarify.
+
+**Note**: differentiate namings daily metrics vs. daily
+
+**Note**: migration - all together or transitional → transitional but taking into account that I don’t want to block Joaquin on Guest KPIs. Add deprecation flags
+
+**Note**: MTD and Monthly should be different models. We can aggregate later! Include date_from and date_to
\ No newline at end of file
diff --git a/notion_data_team_no_files/KPIs Refactor -2025-04-01 1c70446ff9c9800a8aa2d9706416b38d.md b/notion_data_team_no_files/KPIs Refactor -2025-04-01 1c70446ff9c9800a8aa2d9706416b38d.md
new file mode 100644
index 0000000..3fda3ab
--- /dev/null
+++ b/notion_data_team_no_files/KPIs Refactor -2025-04-01 1c70446ff9c9800a8aa2d9706416b38d.md
@@ -0,0 +1,93 @@
+# KPIs Refactor -2025-04-01
+
+# Current State
+
+Cross models in KPIs is a bit… well, messy.
+
+The refactor focuses on `int_monthly_aggregated_metrics_history_by_deal`, which only depends on KPIs models. This model was intended to aggregate ALL the metrics needed for the Main KPIs - Detail by Deal / Deal Comparison tabs.
+
+
+
+Figure 1. Models used to compute int_monthly_aggregated_metrics_history_by_deal. Note how all the dependencies are within KPIs scope.
+
+This model is mostly used for 2 use-cases: Main KPIs and Account Managers reporting. However, there are actually 5 direct downstream models: 2 in the scope of Account Managers and 3 in the scope of Main KPIs.
+
+
+
+Figure 2. Models that depend on int_monthly_aggregated_metrics_history_by_deal. In Red we have Account Managers related dependants, Growth Score and Margin. In Purple we have Churn Rates models, that are a by deal compute that gets attributed to the dimension-based Main KPIs. Similarly, in Yellow, we have a similar setup for Onboarding MRR. Lastly, in Green, we have the Deal Metrics, almost a “copy” of the model to reporting, to feed the Detail by Deal and Deal Comparison tabs in Main KPIs.
+
+Let’s deep dive:
+
+- Account Managers reporting
+ - Growth Score - `int_monthly_growth_score_by_deal`: populates the Account Managers Overview, mostly to compute the growth score.
+ - Margin - `int_monthly_aggregated_metrics_history_by_deal_by_time_window`: populates both Account Margin and Churn Report.
+- Main KPIs
+ - Churn Rates - `int_monthly_12m_window_contribution_by_deal`: used as a first step to compute Churn Rate metrics.
+ - Onboarding MRR - `int_monthly_onboarding_mrr_per_deal`: used as a first step to compute Onboarding MRR metrics
+ - Deal Metrics - `monthly_aggregated_metrics_history_by_deal`: populates Detail by Deal and Deal Comparison tabs. *This is the only model that requires ALL metrics displayed in Figure 1*.
+
+Now, it’s important to note that:
+
+- Churn Rates + Growth Score only need Created Bookings, Listings Booked in X and Total Revenue metrics
+- Onboarding MRR only needs Total Revenue metrics.
+- Margin only needs Created Bookings, Listings Booked in X, Total Revenue, Total Revenue Contributions (Invoiced Operator Revenue, Guest Revenue, APIs Revenue), Revenue Retained and Contributions (RRPR, Waiver Paid Back to Host, Host Resolutions)
+
+Note that the above is in terms of metrics. There is different logic in place, and it usually relies on Deal attributes, mostly coming from Hubspot. In any case, these should live within `int_kpis__dimension_deals` or similar.
+
+## Additional notes
+
+- It’s arbitrary to cut Deal Metrics at Monthly level only (so there is no in-month performance, namely MTD). Account management could benefit from having more timely data, as raised recently by stakeholders.
+- Detail by Deal and Deal Comparison tabs are… well… a mess. I doubt someone would need this amount of information. If it was actually needed, this could potentially be a new dimension, with the dimension value being the `id_deal`. And then we’d just benefit from the existing Main KPIs tabs that allow the deep-dive. We’d just need to test performance!
+
+# Refactor plan
+
+The main issue is that we only computed Revenue (Total, Retained, etc.) in two models in cross. Why don’t we actually compute these in KPIs? That is exactly the point!
+
+The following plan allows us to refactor step by step, ensuring that if we need to switch priorities we will always keep a working setup (as long as each stage is finished…!)
+
+Technically, once Stage 1 is done, Stage 2 and Stages 3+4 are interchangeable. I just decided to focus on Stage 2 as the higher priority because of the immediate benefit we would get if priorities shift. In other words, it’s a faster enabler.
+
+## Stage 1: Total Revenue and Revenue Retained in KPIs
+
+In Stage 1 we want to handle the logic of Total Revenue and Revenue Retained in KPIs, and remove the logic of computing these metrics in cross. The rest would just keep existing as is, without any impact on downstream dependencies or reports. This consists in:
+
+- Create a Total Revenue KPI set of models (in intermediate/kpis folder)
+- Create a Revenue Retained KPI set of models (in intermediate/kpis folder)
+- Make `int_monthly_aggregated_metrics_history_by_deal` depend on these 2 newly created sets of models
+- Make `int_mtd_vs_previous_year_metrics` depend on these 2 newly created sets of models
+
+## Stage 2: Decouple Account Managers from By Deal KPIs
+
+In Stage 2 we just want to decouple the source models in cross that depend on `int_monthly_aggregated_metrics_history_by_deal` for Account Management purposes. This consists in:
+
+- Refactor `int_monthly_growth_score_by_deal` so it reads directly from the needed models in KPIs, instead of the current ALL metrics version.
+- Refactor `int_monthly_aggregated_metrics_history_by_deal_by_time_window` so it reads directly from the needed models in KPIs, instead of the current ALL metrics version.
+
+Once Stage 2 is finished, we’d have effectively decoupled Account Managers reporting from the By Deal models. In other words, those models would no longer depend on `int_monthly_aggregated_metrics_history_by_deal`.
+
+## Stage 3: Churn Rates in KPIs
+
+In Stage 3 we want to compute Churn Rates in KPIs. These are logic intensive models but the only purpose is to compute KPIs for Main KPIs in the by dimension approach (not by deal). This consists in:
+
+- Flag existing Churn Rates models as to be deprecated: `int_monthly_12m_window_contribution_by_deal` + `int_monthly_churn_metrics`
+- Create a Churn Rates KPI set of models (in intermediate/kpis folder)
+- Make `int_mtd_vs_previous_year_metrics` depend on this new set of models
+- Remove deprecated models
+
+## Stage 4: Onboarding MRR in KPIs
+
+In Stage 4 we want to compute Onboarding MRR in KPIs. These are logic intensive models but the only purpose is to compute KPIs for Main KPIs in the by dimension approach (not by deal). This consists in:
+
+- Flag existing Onboarding MRR models as to be deprecated: `int_monthly_onboarding_mrr_per_deal` + `int_mtd_agg_onboarding_mrr_revenue`
+- Create an Onboarding MRR KPI set of models (in intermediate/kpis folder)
+- Make `int_mtd_vs_previous_year_metrics` depend on this new set of models
+- Remove deprecated models
+
+Once Stage 4 is finished, we’d have effectively removed the dependency of Main KPIs - By Dimension on the By Deal models. In other words, it would no longer depend on `int_monthly_aggregated_metrics_history_by_deal`.
+
+# Potential Next Steps
+
+There are 2 main stages that need to be accomplished, each opening the door to potential improvements. Other possibilities might emerge in the meantime, so take these as a guideline:
+
+- After Stage 2, we would have the possibility to freely adapt the Account Managers dedicated reporting. This could include, for instance, allowing for in-current-month data to speed up freshness. Alternatively, or rather in addition, we could start thinking about including the projections at account level to develop alerting systems. Should we decide to keep things as they are, the solution would still work.
+- After Stage 4, we would have the possibility to freely adapt Main KPIs - By Deal. Here we could decide to remove these models altogether, and see if adding Deal as a dimension could work for in-depth understanding. Should we decide to keep them, the solution would still work.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Listing & Deal lifecycle - 2024-07-29 4dc0311b21ca44f8859969e419872ebd.md b/notion_data_team_no_files/Listing & Deal lifecycle - 2024-07-29 4dc0311b21ca44f8859969e419872ebd.md
new file mode 100644
index 0000000..42abb21
--- /dev/null
+++ b/notion_data_team_no_files/Listing & Deal lifecycle - 2024-07-29 4dc0311b21ca44f8859969e419872ebd.md
@@ -0,0 +1,143 @@
+# Listing & Deal lifecycle - 2024-07-29
+
+→ Link to the lifecycle schema: [Lifecycle states](Listing%20&%20Deal%20lifecycle%20-%202024-07-29%204dc0311b21ca44f8859969e419872ebd.md)
+
+This page aims to summarize the first steps conducted towards enabling a proper definition of the lifecycle of a listing and a deal, understanding deal as the unique identifier of a Host/PM/etc of our B2B clients.
+
+Table of contents:
+
+The following sections focus on the lifecycle of a listing, but exactly the same logic applies to the deal lifecycle.
+
+# Introduction
+
+A listing, or accommodation, is the physical place where guests stay whenever they have booked it from a host for a given timeframe. Therefore, a listing corresponds to a single host, but can accommodate multiple guests over time.
+
+The volume of listings that we are working with provides valuable information regarding the scalability and health of our business: more listings, more bookings, more revenue.
+
+In general, though, the abovementioned hypothesis might not always hold, since we can have some listings that are not activated by the host (meaning these cannot accommodate more bookings). And even if a listing is activated, it doesn’t mean that the listing is being booked - it just means it CAN be booked, not that it IS being booked.
+
+During the exercise of defining business KPIs, this subject has gained attention since there are different ways to account for the activity of a listing and, in essence, we’re interested in the one that enables us to understand the potential that our business has. Even if listings show some activity because they have orders, the recency and frequency of their bookings could be quite interesting for our knowledge.
+
+Additionally, we need to take into account that, eventually, a listing could churn. This means that, after a certain period without any booking, we would consider the listing inactive. We could compensate for those by onboarding new hosts and thus acquiring new listings, and even by aiming to reactivate already churned listings. Now we’re starting to see the importance of measuring the lifecycle of a listing!
+
+# Reasoning
+
+In the previous section we have already identified some potential states that could help us understand at which point in the lifecycle a listing currently is. Before jumping into the different categorisations, it’s important to understand the basis on which the lifecycle relies. We can consider these as ‘assumptions’ or, more broadly, as the ‘reasoning’ behind the categorisation:
+
+- We will measure the listing activity based on the bookings created in a given timeframe. At this moment, we’re not excluding any booking state - this meaning that a cancelled booking would still be taken into account to measure listing activity.
+- Based on this logic, we can identify at a macro level 3 main states:
+ - A listing is somehow ‘active’, in the sense that it has had at least 1 booking recently;
+ - A listing is somehow ‘inactive’, in the sense that it has not had any booking recently, or any booking at all in its history;
+ - and finally, a listing can be neither ‘active’ nor ‘inactive’, meaning mainly that the listing has been created recently and has the potential to receive new bookings in the future
+- Based on this logic, it makes sense to separate the path of a ‘new’ listing from those that are ‘not new’, since the business strategy in these 2 areas will probably differ.
+- In essence, it can be interesting to identify those natural movements between ‘new’ to ‘active’, ‘active’ to ‘inactive’ and, yes, ‘inactive’ to ‘active’.
+
+# Lifecycle states
+
+Based on the previous reasoning, and without aiming to have a fully-detailed lifecycle, the Data team has proposed a first approach that categorises the listings into certain lifecycle states, enabling a better comprehension of how the listings evolve.
+
+This first categorisation, developed at the end of Q2 2024, consists of a set of 7 mutually exclusive lifecycle states (a SQL sketch follows the list):
+
+1. **New**: Listings that have been created in the current month, without bookings.
+2. **Never Booked**: Listings that have been created before the current month, without bookings.
+3. **First Time Booked**: Listings that have been booked for the first time in the current month.
+4. **Active**: Listings that have booking activity in the past 12 months (that are not FTB nor reactivated).
+5. **Churning**: Listings that are becoming inactive because of lack of bookings in the past 12 months.
+6. **Inactive**: Listings that have not had a booking for more than 12 months.
+7. **Reactivated**: Listings that have had a booking in the current month that were inactive or churning before. After the 2nd booking during the reactivation month, will be categorised as Active directly.
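+
+As an illustration, the following SQL sketch shows how these mutually exclusive states could be derived for a listing as of the current month (column and table names such as `first_booking_created_date`, `last_booking_created_date`, `previous_state` and `listings_with_booking_dates` are hypothetical, and the Reactivated rule is simplified by ignoring the second-booking shortcut):
+
+```sql
+-- Hypothetical sketch: deriving the lifecycle state of a listing as of today.
+select
+    id_listing,
+    case
+        when last_booking_created_date is null
+            and date_trunc('month', created_date) = date_trunc('month', current_date)
+            then '01-New'
+        when last_booking_created_date is null
+            then '02-Never Booked'
+        when date_trunc('month', first_booking_created_date) = date_trunc('month', current_date)
+            then '03-First Time Booked'
+        when date_trunc('month', last_booking_created_date) = date_trunc('month', current_date)
+            and previous_state in ('05-Churning', '06-Inactive')
+            then '07-Reactivated'
+        when last_booking_created_date >= current_date - interval '12 months'
+            then '04-Active'
+        when last_booking_created_date >= current_date - interval '13 months'
+            then '05-Churning'
+        else '06-Inactive'
+    end as listing_lifecycle_state
+from listings_with_booking_dates
+```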
+
+Below you can see a high-level schema of how the lifecycle would look:
+
+
+
+Lifecycle stages with the natural transitions (continuous lines) and potential reactivation transitions (dashed lines)
+
+Let’s put ourselves in the feet of a host:
+
+Let’s imagine that, as a host, I add my first listing in Superhog. During the first days, it will be categorised as a New listing. If during the same month a booking is created for that listing, it will automatically transition to First Time Booked. On the contrary, if I don’t have any booking in the first month, my listing will be categorised as Never Booked until this first activation happens.
+
+Once my listing has had its first booking, it will be categorised as First Time Booked for that given month. With or without new bookings arriving, it will automatically transition to Active the following month. As long as it has had a booking created within the past 12 months, the listing will be considered Active.
+
+The moment it has been exactly 12 months without a booking, my listing will move to a temporary Churning state for a month. If no new booking is created, my listing will move to Inactive, and it will stay there until a new booking is created - if this ever happens.
+
+In the scenario of having a booking created while under Churning or Inactive, my listing will enter the reactivation flow, thus moving to Reactivated before being considered Active again. Here 2 things can happen: either one month passes in the Reactivated state, or more than 1 booking arrives during the reactivation month, in which case the listing immediately moves to Active.
+
+Finally, the potential terminating states are Inactive and Never Booked. Both of them can be activated or reactivated, but in most cases these states will act as a cemetery of listings.
+
+# Activity measurement
+
+At this stage it is worth noting that some of the previously identified states indicate certain booking activity: Active - of course - but also Reactivated and First Time Booked. This is why, independently of a listing being tagged with any of these 3 states, we can go deeper into the recency of the bookings to better anticipate listing churn.
+
+We’ve also added 3 flags:
+
+- **Has the listing been booked in 1 month?**: If a listing has had a booking created in the current month
+- **Has the listing been booked in 6 months?**: If a listing has had a booking created in the past 6 months
+- **Has the listing been booked in 12 months?**: If a listing has had a booking created in the past 12 months
+
+Note that if a listing has had a booking created this month, all 3 flags will be true. Similarly, if the last booking created for a listing was 5 months ago, only the has_been_booked_in_1_month flag will be false, while the other 2 will be true. This is especially helpful to further categorise the listings that are in the Active state, since the 3 levels will apply. For Reactivated and First Time Booked, since these states only apply when a booking occurs in the current month, all 3 flags will always be true.
+
+This information will help categorise the recency of the last created booking using different time windows, to anticipate potential movements towards the path of inactivity.
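+
+A minimal sketch of how these flags could be computed from the last booking created date (the column and table names are hypothetical):
+
+```sql
+-- Hypothetical sketch: recency flags for a listing.
+select
+    id_listing,
+    coalesce(last_booking_created_date >= date_trunc('month', current_date), false)
+        as has_been_booked_in_1_month,
+    coalesce(last_booking_created_date >= current_date - interval '6 months', false)
+        as has_been_booked_in_6_months,
+    coalesce(last_booking_created_date >= current_date - interval '12 months', false)
+        as has_been_booked_in_12_months
+from listings_with_booking_dates
+```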
+
+# Deal lifecycle
+
+Similarly to the listing lifecycle presented so far, the same stages can apply to a deal. Deal is preferable to host or PM since the same deal can be linked to multiple hosts. Thus, a deal represents the unique B2B entity that we can consider as ‘our client’.
+
+Therefore, for the Deal lifecycle we keep the same categorisation as for Listings. It was developed at the end of Q2 2024 and consists of a set of 7 mutually exclusive lifecycle states:
+
+1. **New**: Deals that have been created in the current month, without bookings.
+2. **Never Booked**: Deals that have been created before the current month, without bookings.
+3. **First Time Booked**: Deals that have been booked for the first time in the current month.
+4. **Active**: Deals that have booking activity in the past 12 months (that are not FTB nor reactivated).
+5. **Churning**: Deals that are becoming inactive because of lack of bookings in the past 12 months.
+6. **Inactive**: Deals that have not had a booking for more than 12 months.
+7. **Reactivated**: Deals that have had a booking in the current month that were inactive or churning before. After the 2nd booking during the reactivation month, will be categorised as Active directly.
+
+We’ve also added the same 3 flags:
+
+- **Has the deal been booked in 1 month?**: If a deal has had a booking created in the current month
+- **Has the deal been booked in 6 months?**: If a deal has had a booking created in the past 6 months
+- **Has the deal been booked in 12 months?**: If a deal has had a booking created in the past 12 months
+
+# How does it look like?
+
+We’ve explored the theoretical aspect of this categorisation in great depth. Now it’s time to put some numbers on the page, so we can see the volumes associated with each stage in the Listing lifecycle.
+
+## Listing Lifecycle (as of 20th June 2024)
+
+| Listing Lifecycle State | Has been booked in 12 months? | Has been booked in 6 months? | Has been booked in 1 month? | Volume | Share |
+| --- | --- | --- | --- | --- | --- |
+| 01-New | No | No | No | 4667 | 3.26% |
+| 02-Never Booked | No | No | No | 95024 | 66.45% |
+| 03-First Time Booked | Yes | Yes | Yes | 1526 | 1.07% |
+| 04-Active | Yes | No | No | 9201 | 6.43% |
+| 04-Active | Yes | Yes | No | 14991 | 10.48% |
+| 04-Active | Yes | Yes | Yes | 6840 | 4.78% |
+| 05-Churning | No | No | No | 947 | 0.66% |
+| 06-Inactive | No | No | No | 9765 | 6.83% |
+| 07-Reactivated | Yes | Yes | Yes | 32 | 0.02% |
+
+*The 04-Active state totals 31032 listings, accounting for **21.69%** of all listings.*
+
+**(!) Important**: The data displayed here is a snapshot as of 20th June 2024. Keep in mind that these values will evolve every day.
+
+**(!) Disclaimer**: there are some listings in the database that appear without a user host id linked, which looks suspicious. While we check for a potential data quality issue in this area, the figures could evolve if a fix is needed.
+
+## Deal Lifecycle (as of 20th June 2024)
+
+| Deal Lifecycle State | Has been booked in 12 months? | Has been booked in 6 months? | Has been booked in 1 month? | Volume | Share |
+| --- | --- | --- | --- | --- | --- |
+| 01-New | No | No | No | 24 | 1.56% |
+| 02-Never Booked | No | No | No | 233 | 15.12% |
+| 03-First Time Booked | Yes | Yes | Yes | 52 | 3.37% |
+| 04-Active | Yes | No | No | 172 | 11.16% |
+| 04-Active | Yes | Yes | No | 312 | 20.25% |
+| 04-Active | Yes | Yes | Yes | 605 | 39.26% |
+| 05-Churning | No | No | No | 17 | 1.10% |
+| 06-Inactive | No | No | No | 125 | 8.11% |
+| 07-Reactivated | Yes | Yes | Yes | 1 | 0.06% |
+
+*The 04-Active state totals 1089 deals, accounting for **70.67%** of all deals.*
+
+**(!) Important**: The data displayed here is a snapshot as of 20th June 2024. Keep in mind that these values will evolve every day.
+
+**(!) Disclaimer**: not all hosts have a deal associated, thus the data represented here is a lower-bound estimate of reality. Improvements to how deal information is set are pending, to improve data quality.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Little Git SSH cloning trick 3d33758de34742b9ac180fd9c7b5e6b3.md b/notion_data_team_no_files/Little Git SSH cloning trick 3d33758de34742b9ac180fd9c7b5e6b3.md
new file mode 100644
index 0000000..7e7529d
--- /dev/null
+++ b/notion_data_team_no_files/Little Git SSH cloning trick 3d33758de34742b9ac180fd9c7b5e6b3.md
@@ -0,0 +1,15 @@
+# Little Git SSH cloning trick
+
+When you try to git clone repos with SSH from our Azure DevOps environment, you will get URLs that look like this:
+
+`guardhog@vs-ssh.visualstudio.com:v3/guardhog/Data/data-repo-blablabla`
+
+This string will not work due to some funny quirkiness from Microsoft, there to make our lives more exciting and challenging.
+
+To fix it, replace the `vs-ssh.visualstudio.com` in the clone string with `ssh.dev.azure.com`.
+
+So, the original url and the replaced, final, working one would look like this:
+
+`guardhog@vs-ssh.visualstudio.com:v3/guardhog/Data/data-repo-blablabla`
+
+`guardhog@ssh.dev.azure.com:v3/guardhog/Data/data-repo-blablabla`
\ No newline at end of file
diff --git a/notion_data_team_no_files/Migration Old Dash → New Dash impacts upon DWH 1ae0446ff9c9800eb0e6d02722031254.md b/notion_data_team_no_files/Migration Old Dash → New Dash impacts upon DWH 1ae0446ff9c9800eb0e6d02722031254.md
new file mode 100644
index 0000000..926657b
--- /dev/null
+++ b/notion_data_team_no_files/Migration Old Dash → New Dash impacts upon DWH 1ae0446ff9c9800eb0e6d02722031254.md
@@ -0,0 +1,169 @@
+# Migration Old Dash → New Dash impacts upon DWH
+
+Whenever an Old Dash account needs to be migrated to the New Dash, the following script is run on live:
+
+- [MigrateUserToNewDash].sql
+
+ ```sql
+ /****** Object: StoredProcedure [dbo].[MigrateUserToNewDash] Script Date: 06/03/2025 13:10:13 ******/
+ SET ANSI_NULLS ON
+ GO
+ SET QUOTED_IDENTIFIER ON
+ GO
+ ALTER PROCEDURE [dbo].[MigrateUserToNewDash]
+ @userId varchar(50)
+ AS
+ BEGIN
+
+ SET NOCOUNT ON
+
+ IF @userId IS NULL
+ BEGIN
+ SELECT 0 AS Success, 'UserId cannot be null' AS ProblemMessage, 'MigrateUserToNewDash has not received a userId' AS ErrorMessage;
+ RETURN
+ END
+
+ IF NOT EXISTS (SELECT 1 FROM dbo.Claim WHERE UserId = @userId AND ClaimType = 'Platform')
+ BEGIN
+ SELECT 0 AS Success, 'User does not have Platform ClaimType' AS ProblemMessage, 'MigrateUserToNewDash stored procedure requires a user with Platform ClaimType' AS ErrorMessage;
+ RETURN
+ END
+
+ BEGIN TRY
+
+ SET @userId = LOWER(@userId)
+ DECLARE @date datetime = GETUTCDATE()
+ DECLARE @platformRoleId UNIQUEIDENTIFIER
+ DECLARE @knowYourGuestRoleId UNIQUEIDENTIFIER
+ DECLARE @newDashVersion varchar(100)
+ DECLARE @pmsName varchar(200) = 'Unknown'
+
+ SET @platformRoleId = (SELECT Id FROM [Role] WHERE [Name] = 'Platform')
+ SET @knowYourGuestRoleId = (SELECT Id FROM [Role] WHERE [Name] = 'KnowYourGuest')
+ SET @newDashVersion = (SELECT AppSettingValue FROM dbo.AppSetting WHERE AppSettingKey = 'KygVersionForDataTeam')
+ SET @pmsName = (SELECT TOP 1 it.[Name] FROM integration.Integration i INNER JOIN integration.IntegrationType it ON i.IntegrationTypeId = it.Id WHERE SuperhogUserId = @userId and IsActive = 1 ORDER BY i.Id Desc)
+
+ DECLARE @accommodationIdsOfInterest TABLE(AccommodationId int)
+ DECLARE @bookingIdsOfInterest TABLE(BookingId int)
+ DECLARE @bookingViewIdsOfInterest Table (BookingViewId int)
+ DECLARE @verificationRequestIdsOfInterest TABLE(VerificationRequestId int)
+
+ INSERT INTO @accommodationIdsOfInterest
+ SELECT AccommodationId FROM AccommodationToUser WHERE SuperhogUserId = @userId
+
+ INSERT Into @bookingIdsOfInterest
+ SELECT
+ BookingId
+ FROM
+ Booking b
+ INNER JOIN
+ @accommodationIdsOfInterest a ON a.AccommodationId = b.AccommodationId
+ WHERE
+ NOT EXISTS (SELECT 1 FROM VerificationRequest WHERE Id = b.VerificationRequestId AND SuperhogUserId IS NOT NULL)
+ AND
+ IntegrationId IS NOT NULL
+ AND
+ CheckIn > @date
+ AND
+ ISNULL(Summary, '') <> 'AIRBNB_CANCELLATION'
+
+ INSERT INTO @bookingViewIdsOfInterest
+ SELECT Id FROM BookingView bv INNER JOIN @bookingIdsOfInterest boi ON bv.BookingId = boi.BookingId
+
+ INSERT INTO @verificationRequestIdsOfInterest
+ SELECT
+ Id
+ FROM
+ VerificationRequest
+ WHERE
+ Id IN (SELECT VerificationRequestId FROM Booking WHERE BookingId in (SELECT BookingId FROM @bookingIdsOfInterest))
+
+ BEGIN TRANSACTION
+
+ DELETE FROM dbo.Claim WHERE UserId = @userId and ClaimType = 'Platform'
+ DELETE FROM dbo.UserRole WHERE UserId = @userId and RoleId = @platformRoleId
+
+ IF @pmsName IS NOT NULL
+ BEGIN
+ INSERT INTO dbo.Claim (UserId, ClaimType, ClaimValue)
+ VALUES (@userId, 'KygRegistrationIntegrationTypeName', @pmsName);
+ END
+
+ INSERT INTO dbo.Claim (UserId, ClaimType, ClaimValue)
+ VALUES
+ (@userId, 'KygRegistrationSignUpType', 'KygFreemium'),
+ (@userId, 'NewDashMoveDate', CONVERT(VARCHAR(200), @date, 120)),
+ (@userId, 'KygSource', 'olddashboard'),
+ (@userId, 'NewDashVersion', @newDashVersion);
+
+ INSERT INTO dbo.UserRole (UserId, RoleId)
+ VALUES (@userId, @knowYourGuestRoleId);
+
+ INSERT INTO UserProductBundle([Name], DisplayName, DisplayOnFrontEnd, SuperhogUserId, ProductBundleId, StartDate, CreatedDate, UpdatedDate, ProtectionPlanId, ChosenProductServices)
+ SELECT pb.[Name], pb.DisplayName, pb.DisplayOnFrontEnd, @userId, pb.Id, @date, @date, @date, pb.ProtectionPlanId, p.RequiredProductServices
+ FROM ProductBundle pb
+ INNER JOIN ProtectionPlan pp ON pb.ProtectionPlanId = pp.Id
+ INNER JOIN Protection p ON p.Id = pp.ProtectionId
+ WHERE pp.EndDate IS NULL
+
+ UPDATE Booking SET VerificationRequestId = NULL WHERE BookingId IN (SELECT BookingId FROM @bookingIdsOfInterest)
+ UPDATE BookingView SET VerificationRequestId = NULL WHERE BookingId IN (SELECT BookingId FROM @bookingIdsOfInterest)
+ UPDATE PricePlanToUser SET ListingFeeNet = 0 WHERE SuperhogUserId = @userId AND StartDate < @date AND ((EndDate > @date) OR (EndDate is null))
+
+ DELETE FROM Verification WHERE VerificationRequestId IN (SELECT VerificationRequestId FROM @verificationRequestIdsOfInterest)
+ DELETE FROM VerificationRequestFeatureFlag WHERE VerificationRequestId IN (SELECT VerificationRequestId FROM @verificationRequestIdsOfInterest)
+ DELETE FROM VerificationRequest WHERE Id IN (SELECT VerificationRequestId FROM @verificationRequestIdsOfInterest)
+
+ DELETE FROM integration.StayImportToBooking WHERE BookingId IN (SELECT BookingId FROM @bookingIdsOfInterest)
+ DELETE FROM BookingToProductBundle WHERE BookingId IN (SELECT BookingId FROM @bookingIdsOfInterest)
+ DELETE FROM ScreeningToBooking WHERE BookingId IN (SELECT BookingId FROM @bookingIdsOfInterest)
+
+ DELETE FROM BookingViewToService WHERE BookingViewId IN (SELECT BookingViewId FROM @bookingViewIdsOfInterest)
+ DELETE FROM BookingView WHERE Id IN (SELECT BookingViewId FROM @bookingViewIdsOfInterest)
+
+ DELETE FROM Booking WHERE BookingId IN (SELECT BookingId FROM @bookingIdsOfInterest)
+
+ UPDATE Booking SET IsLegacy = 1 WHERE AccommodationId IN (SELECT AccommodationId FROM @accommodationIdsOfInterest)
+
+ COMMIT TRANSACTION
+
+ SELECT 1 AS Success, NULL AS ProblemMessage, NULL AS ErrorMessage
+ END TRY
+ BEGIN CATCH
+ IF @@TRANCOUNT > 0
+ BEGIN
+
+ ROLLBACK TRANSACTION
+ END
+
+ DECLARE @ErrorMessage NVARCHAR(4000)
+
+ SET @ErrorMessage = ERROR_MESSAGE()
+
+ SELECT 0 AS Success, 'Catch statement caught in MigrateUserToNewDash' AS ProblemMessage, ERROR_MESSAGE() AS ErrorMessage
+
+ END CATCH
+
+ END
+
+ ```
+
+
+This has the following impacts:
+
+### Data Drift on DWH
+
+- UPDATE lines: these should also set the `UpdatedDate` so we can capture the change automatically in DWH (see the sketch below). This refers to lines 102 (`Booking`), 103 (`BookingView`), 104 (`PricePlanToUser`) and 119 (`Booking`). *That said, the affected Booking and BookingView records are likely deleted later anyway, so they should not have much impact.*
+- DELETE lines: this is troublesome on the DWH side as it definitely creates drift, but setting a full refresh on the Data side for each affected table every day would be very expensive. Best would be to wait for Pablo to be back, as he's the expert. This affects lines 106 (`Verification`), 107 (`VerificationRequestFeatureFlag`), 108 (`VerificationRequest`), 111 (`BookingToProductBundle`), 117 (`Booking`)
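+
+For illustration, a minimal, hypothetical sketch of the `UpdatedDate` change for line 119 (`Booking`); the same pattern would apply to the other UPDATE lines. It assumes `Booking` has an `UpdatedDate` column, and reuses `@date` and the table variable declared in the stored procedure above:
+
+```sql
+-- Hypothetical sketch: same UPDATE as line 119 of MigrateUserToNewDash, but also
+-- stamping UpdatedDate so incremental DWH loads can pick up the change.
+-- Assumes Booking has an UpdatedDate column.
+UPDATE Booking
+SET IsLegacy = 1,
+    UpdatedDate = @date
+WHERE AccommodationId IN (SELECT AccommodationId FROM @accommodationIdsOfInterest)
+```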
+
+### Missing historical data
+
+- `PricePlanToUser` should also set the minimum listing fee to 0, to my understanding - unless this is somehow needed for New Dash, but I doubt it.
+- I'd personally prefer end-dating the user's latest record and inserting a new one with the needed values at 0, so we keep the historical price plan data per user unaltered - unless this somehow affects the logic for New Dash. A sketch of this alternative follows below.
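+
+A rough, unvalidated sketch of that end-dating alternative, using only the columns referenced in the script above (the real table surely has more, so this is illustrative only):
+
+```sql
+-- Hypothetical alternative: keep price plan history instead of overwriting it.
+-- @userId and @date as declared in the stored procedure above.
+
+-- 1. Close the currently active record instead of mutating it
+UPDATE PricePlanToUser
+SET EndDate = @date
+WHERE SuperhogUserId = @userId
+  AND StartDate < @date
+  AND (EndDate > @date OR EndDate IS NULL)
+
+-- 2. Insert a fresh record carrying the zeroed fee from @date onwards
+INSERT INTO PricePlanToUser (SuperhogUserId, ListingFeeNet, StartDate, EndDate)
+VALUES (@userId, 0, @date, NULL)
+```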
+
+Worth mentioning that I don’t observe DWH impacts from the following:
+
+- DELETE on `Claim` & `UserRole` does not affect us as we refresh the tables every day.
+- DELETE on lines 110 (`StayImportToBooking`) and 112 (`ScreeningToBooking`) does not affect us as we do not have these tables in DWH.
+
+This problem is being handled here: [https://guardhog.visualstudio.com/Data/_workitems/edit/28243](https://guardhog.visualstudio.com/Data/_workitems/edit/28243)
\ No newline at end of file
diff --git a/notion_data_team_no_files/New Dash - Data issues 1370446ff9c980738a36fd7323b93340.md b/notion_data_team_no_files/New Dash - Data issues 1370446ff9c980738a36fd7323b93340.md
new file mode 100644
index 0000000..c7f685c
--- /dev/null
+++ b/notion_data_team_no_files/New Dash - Data issues 1370446ff9c980738a36fd7323b93340.md
@@ -0,0 +1,55 @@
+# New Dash - Data issues
+
+This is a summary of the different data issues that have been reported to the **prd-new-dash-reporting** Slack channel since its creation on the 4th of September. It does not aim to cover the discussions on technical needs for reporting, even though those requirements remain more or less the same as 2 months ago.
+
+It’s worth mentioning that most of the back-and-forth on the reporting needs could have been avoided by having technical documentation. The Data Team first asked for it on the 4th of September, if not earlier - and we still have not received it.
+
+# KYG Lite users migrated to New Dash broke reporting
+
+On Thursday 12th of September the New Dashboard reporting was displaying a huge quantity of Bookings (around 3k) not attributable to New Dash. This happened because the migration of KYG Lite users was not communicated to the Data Team, even though we had anticipated since the 4th of September that we needed clarity on this subject. As a result, a label was put in place on the reporting until the issue was fixed:
+
+
+
+On Friday 13th of September Ben R. created a temporary Claim called MVPMigratedUser that allows us to properly track the second batch of migrated users, and the Data Team effectively hardcoded the migration date of KYG Lite users to the real migration date.
+
+The incident was resolved on Monday 16th of September.
+
+# First duplicated booking in BookingToProductBundle
+
+On the 4th of October a data alert was triggered due to a duplicated booking appearing in the table BookingToProductBundle, coming from Clay’s test account.
+
+After discussing the logic with Clay and Dagmara, it was clear that the alert was caused by a mis-implementation on the Data side - it was the first time since the creation of the report that a user (Clay, in this case) had overridden a booking that had a Basic Screening to change it to Basic Program. After such a change, the logic on the Data side does not allow a Booking to appear twice in the BookingToProductBundle table if both records are supposed to be “active” - meaning, with no EndDate.
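+
+For context, a simplified sketch of the kind of uniqueness check this alert is based on (not the actual test definition we run):
+
+```sql
+-- Flag bookings that appear more than once in BookingToProductBundle while both
+-- records are still "active" (no EndDate).
+SELECT BookingId, COUNT(*) AS ActiveRecords
+FROM BookingToProductBundle
+WHERE EndDate IS NULL
+GROUP BY BookingId
+HAVING COUNT(*) > 1
+```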
+
+# Second duplicated booking in BookingToProductBundle
+
+On the 23rd of October, another data alert was raised. In this case, it was a true duplicate, since both records appeared to be active at the same time (no EndDate):
+
+
+
+After a bit of back and forth, a manual update conducted by Gus in a meeting with Uri and Daga fixed the faulty record on the 25th of October, by simply end-dating the first created record.
+
+# Unfixed issue on Booking duplicates
+
+On Wednesday 30th of October, in the middle of the day and after some manual test execution on the Data side, we received the same alert about duplicated bookings with no EndDate in BookingToProductBundle. This was the first day that V2 went live.
+
+A total of 8 bookings (16 records) were faulty. All these records were created on the 30th of October, likely linked to the tech release.
+
+We kept receiving alerts in every DWH job run due to this issue until Friday 8th of November, reaching a total of 640 duplicated records that likely affected the reliability of the data displayed in the reporting between the 30th of October and Friday 8th of November.
+
+# Disappearing records in BookingToProductBundle
+
+On Wednesday 6th of November, while re-raising the fact that there’s a bug in production once again, the Data Team noticed a data drift in the ingestion of data from BookingToProductBundle - the DWH contained records that didn’t exist in the backend.
+
+After discussing the details of the backend table with Yaseen, it was clear that some records in the backend had “disappeared”. An example of a Booking with missing history was studied - the user of the booking is the HomeToHost account.
+
+After investigation on Yaseen’s side, the potential root causes are:
+
+> there are a number of reasons why this could have occurred, and we have been having DB issues which Ben has been investigating so suspect that was the cause. There was a release that day and that involved a webhook issue that was causing performance issues but unclear if fixing this or other changes being made by Ben resolved
+>
+
+After deep-diving further, Yaseen pointed out:
+
+> the bookingid in your pic (931223) and all the others by that user *[HomeToHost]* were not done via KYG Application but manually via database as a workaround to a performance issue. So that may be the cause.
+>
+
+On Thursday 7th of November, Uri sent the DWH history for that user to the Dash Squad. The issue is still active at the time of writing.
\ No newline at end of file
diff --git a/notion_data_team_no_files/New Dash - Staging 1eb0446ff9c9803dae5bfd03958a76ad.md b/notion_data_team_no_files/New Dash - Staging 1eb0446ff9c9803dae5bfd03958a76ad.md
new file mode 100644
index 0000000..a2eaba6
--- /dev/null
+++ b/notion_data_team_no_files/New Dash - Staging 1eb0446ff9c9803dae5bfd03958a76ad.md
@@ -0,0 +1,13 @@
+# New Dash - Staging
+
+This guide explains how to navigate the New Dashboard, to understand what it looks like from a host user’s point of view. This only works for the Staging version of the New Dashboard.
+
+Steps:
+
+- Navigate to [https://kyg-web-dev001.azurewebsites.net/auth/login](https://kyg-web-dev001.azurewebsites.net/auth/login)
+- Use the credentials available in the Data Shared Folder in Keeper, by the name **Truvi - Staging New Dashboard - Shared**
+- Search for the user Data Team Hijacked Test Account (or use the email **`loutestgbp@gmail.com`**) and impersonate it. You can play and mess around as much as you want with this user.
+
+
+
+Keep in mind that other users might be used for testing purposes by other colleagues. Stick to the default user specified above.
\ No newline at end of file
diff --git a/notion_data_team_no_files/New Pricing + New Dashboard (from Data POV) 1130446ff9c980ea8790e6ab500d3683.md b/notion_data_team_no_files/New Pricing + New Dashboard (from Data POV) 1130446ff9c980ea8790e6ab500d3683.md
new file mode 100644
index 0000000..cabd635
--- /dev/null
+++ b/notion_data_team_no_files/New Pricing + New Dashboard (from Data POV) 1130446ff9c980ea8790e6ab500d3683.md
@@ -0,0 +1,15 @@
+# New Pricing + New Dashboard (from Data POV)
+
+[Data Requirements for New Dash/New Pricing](Data%20Requirements%20for%20New%20Dash%20New%20Pricing%201420446ff9c980eaab6ec6cb02714557.md)
+
+[Services and Revenue modelling](Services%20and%20Revenue%20modelling%201420446ff9c980118e0cfffa7c41f369.md)
+
+[New Dash - Data issues](New%20Dash%20-%20Data%20issues%201370446ff9c980738a36fd7323b93340.md)
+
+[2024-10-02 - Integrating New Dashboard & New Pricing into DWH](2024-10-02%20-%20Integrating%20New%20Dashboard%20&%20New%20Prici%201130446ff9c9804a9cb2f5d49e073bab.md)
+
+First exploration conducted, including minimum reporting for MVP (might be outdated).
+
+[Retrieving New Dash MVP info](Retrieving%20New%20Dash%20MVP%20info%2037429e2b559e492a881c088bdba5ad80.md)
+
+[Migration Old Dash → New Dash impacts upon DWH](Migration%20Old%20Dash%20%E2%86%92%20New%20Dash%20impacts%20upon%20DWH%201ae0446ff9c9800eb0e6d02722031254.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Onboarding checklist - Joaquín 0f979c6139114f96b9a37bd709edf09c.md b/notion_data_team_no_files/Onboarding checklist - Joaquín 0f979c6139114f96b9a37bd709edf09c.md
new file mode 100644
index 0000000..5526dda
--- /dev/null
+++ b/notion_data_team_no_files/Onboarding checklist - Joaquín 0f979c6139114f96b9a37bd709edf09c.md
@@ -0,0 +1,104 @@
+# Onboarding checklist - Joaquín
+
+Welcome to Superhog!
+
+This is a rough checklist on stuff you might need/want to do as part of your start in Superhog and the Data Team.
+
+Feel free to make a copy of this page so you can track your own progress.
+
+Please do contribute adding missing stuff and removing outdated references so the checklist stays in good shape. Your future colleagues will be grateful.
+
+---
+
+- [x] Get a @superhog.com user and email (contact Ben Robinson/Will Cole)
+- Equipment checklist
+ - [x] Laptop
+ - You probably want something with an i7 and at least 32gb of RAM. If the device you received has lower specs, get in touch with Pablo so we can look for a solution.
+ - [x] Headset
+ - [ ] Mouse
+ - Need anything else? Contact Will Coley.
+- Apps, accesses and permissions
+ - [x] Keeper Security Account (contact Mike Hayward). You probably want to make this one first if possible so that you can start keeping all your accesses and passwords tidy from minute one. Not compulsory, just a tip.
+ - [x] Also, ask Pablo to get added to Shared Data Folder in Keeper so you have access to shared credentials from the Data Team.
+ - [x] You will also need an Authenticator app in your phone for the 2 factor authentication of some apps. Microsoft Authenticator is a sensible option.
+ - [x] Outlook and Teams (contact Ben Robinson)
+ - [x] Set up your Onedrive to keep all your files backed up
+ - [x] Get access to Slack
+ - You probably also want to join the following channels
+ - [x] #data
+ - [x] #data-alerts
+ - [x] #data-team-internal
+ - [x] #all-staff
+ - [x] #bcn-crew
+ - [x] Get VPN credentials for the Data Platform (contact Pablo Martin)
+ - [x] Get access to Confluence (contact Ben Robinson)
+ - [x] Get access to Miro (contact Ben Cotte)
+ - [x] Get access to Product Board (contact Ben Cotte)
+ - [x] Get access to Notion (contact Ben Cotte)
+ - [x] Get access to Mixpanel (contact Ben Cotte)
+ - [x] Get access to Azure Devops (contact Ben Robinson)
+ - [x] And ask Pablo to be added to the Data project and any relevant repositories
+ - [x] Get access to Sage HR (contact Will Coley)
+
+ You will also get some tasks to complete directly in Sage HR:
+
+ - [x] Submit bank account details to HR
+ - [x] Read employee handbook
+ - [x] Complete personal details
+ - [x] Complete emergency contacts
+ - [x] Submit birthday
+ - [x] Display screen equipment
+ - [x] Get access to PowerBI (contact Pablo Martin)
+ - [ ] Get access to Hubspot (contact Alex Anderson)
+ - [x] Get access to Culture Amp (contact Ben Cotte)
+ - [x] Check our document templates at [https://guardhog.sharepoint.com/sites/Guardhoggroup/Document Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument Centre%2FSUPERHOG%2F7. ALL STAFF%2FPowerPoint Templates&p=true&ga=1](https://guardhog.sharepoint.com/sites/Guardhoggroup/Document%20Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument%20Centre%2FSUPERHOG%2F7%2E%20ALL%20STAFF%2FPowerPoint%20Templates&p=true&ga=1)
+ - [x] Check our Brand Guidelines here: [https://guardhog.sharepoint.com/sites/Guardhoggroup/Document Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument Centre%2FSUPERHOG%2F3. MARKETING%2FBranding%2FBrand Guidelines PDF%2FBrandBook_2023_SH%26KYG.pdf&parent=%2Fsites%2FGuardhoggroup%2FDocument Centre%2FSUPERHOG%2F3. MARKETING%2FBranding%2FBrand Guidelines PDF&p=true&ga=1](https://guardhog.sharepoint.com/sites/Guardhoggroup/Document%20Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument%20Centre%2FSUPERHOG%2F3%2E%20MARKETING%2FBranding%2FBrand%20Guidelines%20PDF%2FBrandBook%5F2023%5FSH%26KYG%2Epdf&parent=%2Fsites%2FGuardhoggroup%2FDocument%20Centre%2FSUPERHOG%2F3%2E%20MARKETING%2FBranding%2FBrand%20Guidelines%20PDF&p=true&ga=1)
+- BCN Office specifics
+    - Norrsken
+ - Wifi details:
+ - Network: Norrsken Member
+ - Password: 4impact.unicorns!
+ - [x] Get included in the Norrsken Slack workspace
+ - [x] Download the Salto KS app in your phone to open doors. You should also receive instructions in your email to be able to do this.
+ - [ ] You can book meeting rooms here: [https://norrsken-barcelona.officernd.com/calendar?office=5d1bcda0dbd6e40010479eec](https://norrsken-barcelona.officernd.com/calendar?office=5d1bcda0dbd6e40010479eec)
+ - [ ] Someone visiting? List them up here so they get access at the entrance: [https://norrsken-barcelona.officernd.com/account/visitors](https://norrsken-barcelona.officernd.com/account/visitors)
+ - [ ] You can request a Locker for yourself through this form: [https://norrsken.typeform.com/lockers](https://norrsken.typeform.com/lockers)
+ - Other interesting things
+        - Honest Greens is a nice option to have lunch or take it away and eat at Norrsken ([https://maps.app.goo.gl/mWMVjjaiBg9pyvzj7](https://maps.app.goo.gl/mWMVjjaiBg9pyvzj7))
+ - You might be interested in this very cool Cooltra offering to move around BCN: [https://norrsken-barcelona.officernd.com/benefits](https://norrsken-barcelona.officernd.com/benefits)
+- [x] Say hi to everyone! You can introduce yourself at the slack channel #all-staff.
+- Get familiar with documentation from the Data team
+ - [x] Crawl through our [Data News](https://www.notion.so/Data-News-7dc6ee1465974e17b0898b41a353b461?pvs=21) to know what was happening before you joined
+ - [ ] Go through the [Data Catalogue](https://www.notion.so/Data-Catalogue-78d91434aa1442cbb6cc13b73c7fb664?pvs=21) to familiarize yourself with the existing data sources and data products in Superhog
+ - [ ] Check through the [Data Team internals](Data%20Team%20Internals%20cf0d13d49c9643a987527e1fe2f65d49.md) to know more about how we work (tends to be WIP, so don’t be scared if something looks unfinished)
+
+---
+
+- Intro to Superhog
+ - General Business and Products
+ - Ben C. (overview)
+ - Joan Tomas
+ - All guest products
+ - Guest journey
+ - Lou Dowds/Dagmara Bujak
+ - Internal Dashboards
+ - Host experience
+ - Ana de Vega
+ - APIs
+ - Technology
+ - Superhog Platform - Ben Robinson
+ - CRM - Hubspot - Alex Anderson
+ - Stripe (plataforma de pago) → Pablo
+ - Mixpanel → Louise Dowds & Joan Tomas
+ - People
+ - Marketing & Sales
+ - Leo & Matt & Beth ****→ RevOps and Sales leadership
+ - Finance
+ - Jamie Deeson → Finance Wizard
+ - Other
+ - Humphrey
+- Intro to Data
+ - Data Platform
+ - DWH
+ - PBI
+ - dbt project
\ No newline at end of file
diff --git a/notion_data_team_no_files/Onboarding checklist d5eb8cb36b404fc9a0ccacddf9862001.md b/notion_data_team_no_files/Onboarding checklist d5eb8cb36b404fc9a0ccacddf9862001.md
new file mode 100644
index 0000000..b0ff8ab
--- /dev/null
+++ b/notion_data_team_no_files/Onboarding checklist d5eb8cb36b404fc9a0ccacddf9862001.md
@@ -0,0 +1,111 @@
+# Onboarding checklist
+
+Welcome to Superhog!
+
+This is a rough checklist on stuff you might need/want to do as part of your start in Superhog and the Data Team.
+
+Feel free to make a copy of this page so you can track your own progress.
+
+Please do contribute adding missing stuff and removing outdated references so the checklist stays in good shape. Your future colleagues will be grateful.
+
+---
+
+- [ ] Get a @superhog.com user and email (contact Ben Robinson/Will Cole)
+- Equipment checklist
+ - [ ] Laptop
+ - You probably want something with an i7 and at least 32gb of RAM. If the device you received has lower specs, get in touch with Pablo so we can look for a solution.
+ - [ ] Headset
+ - [ ] Mouse
+ - Need anything else? Contact Will Coley
+- Apps, accesses and permissions
+ - [ ] Keeper Security Account (contact Mike Hayward). You probably want to make this one first if possible so that you can start keeping all your accesses and passwords tidy from minute one. Not compulsory, just a tip.
+ - [ ] Also, ask Pablo to get added to Shared Data Folder in Keeper so you have access to shared credentials from the Data Team.
+ - [ ] You will also need an Authenticator app in your phone for the 2 factor authentication of some apps. Microsoft Authenticator is a sensible option.
+ - [ ] Outlook and Teams (contact Ben Robinson)
+ - [ ] Set up your Onedrive to keep all your files backed up
+ - [ ] Get access to Slack
+ - You probably also want to join the following channels
+ - [ ] #data
+ - [ ] #data-alerts
+ - [ ] #data-team-internal
+ - [ ] #all-staff
+ - [ ] #bcn-crew
+ - [ ] Get added to the data-team group
+ - [ ] Get VPN credentials for the Data Platform (contact Pablo Martin)
+ - [ ] Get access to Confluence (contact Ben Robinson)
+ - [ ] Get access to Miro (contact Ben Cotte)
+ - [ ] Get access to Product Board (contact Ben Cotte)
+ - [ ] Get access to Notion (contact Ben Cotte)
+ - [ ] Get access to Mixpanel (contact Ben Cotte)
+ - [ ] Get access to Azure Devops (contact Ben Robinson)
+ - [ ] And ask Pablo to be added to the Data project and any relevant repositories
+ - [ ] Get access to Sage HR (contact Will Coley)
+
+ You will also get some tasks to complete directly in Sage HR:
+
+ - [ ] Submit bank account details to HR
+ - [ ] Read employee handbook
+ - [ ] Complete personal details
+ - [ ] Complete emergency contacts
+ - [ ] Submit birthday
+ - [ ] Display screen equipment
+ - [ ] Get access to PowerBI (contact Pablo Martin)
+ - [ ] Get access to Hubspot (contact Alex Anderson)
+ - [ ] Get access to Culture Amp (contact Ben Cotte)
+ - [ ] Get access to the Data Sharepoint (contact Pablo or Uri)
+ - [ ] Check our document templates at [https://guardhog.sharepoint.com/sites/Guardhoggroup/Document Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument Centre%2FSUPERHOG%2F7. ALL STAFF%2FPowerPoint Templates&p=true&ga=1](https://guardhog.sharepoint.com/sites/Guardhoggroup/Document%20Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument%20Centre%2FSUPERHOG%2F7%2E%20ALL%20STAFF%2FPowerPoint%20Templates&p=true&ga=1)
+ - [ ] Check our Brand Guidelines here: [https://guardhog.sharepoint.com/sites/Guardhoggroup/Document Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument Centre%2FSUPERHOG%2F3. MARKETING%2FBranding%2FBrand Guidelines PDF%2FBrandBook_2023_SH%26KYG.pdf&parent=%2Fsites%2FGuardhoggroup%2FDocument Centre%2FSUPERHOG%2F3. MARKETING%2FBranding%2FBrand Guidelines PDF&p=true&ga=1](https://guardhog.sharepoint.com/sites/Guardhoggroup/Document%20Centre/Forms/AllItems.aspx?id=%2Fsites%2FGuardhoggroup%2FDocument%20Centre%2FSUPERHOG%2F3%2E%20MARKETING%2FBranding%2FBrand%20Guidelines%20PDF%2FBrandBook%5F2023%5FSH%26KYG%2Epdf&parent=%2Fsites%2FGuardhoggroup%2FDocument%20Centre%2FSUPERHOG%2F3%2E%20MARKETING%2FBranding%2FBrand%20Guidelines%20PDF&p=true&ga=1)
+- BCN Office specifics
+    - Norrsken
+ - Wifi details:
+ - Network: Norrsken Member
+ - Password: 4impact.unicorns!
+        - [ ] Check quarantined emails to allow delivery from Norrsken
+ - [ ] Get included in the Norrsken Slack workspace
+ - [ ] Download the Salto KS app in your phone to open doors. You should also receive instructions in your email to be able to do this.
+ - [ ] You can book meeting rooms here: [https://norrsken-barcelona.officernd.com/calendar?office=5d1bcda0dbd6e40010479eec](https://norrsken-barcelona.officernd.com/calendar?office=5d1bcda0dbd6e40010479eec)
+ - [ ] Someone visiting? List them up here so they get access at the entrance: [https://norrsken-barcelona.officernd.com/account/visitors](https://norrsken-barcelona.officernd.com/account/visitors)
+ - [ ] You can request a Locker for yourself through this form: [https://norrsken.typeform.com/lockers](https://norrsken.typeform.com/lockers)
+ - Other interesting things
+        - Honest Greens is a nice option to have lunch or take it away and eat at Norrsken ([https://maps.app.goo.gl/mWMVjjaiBg9pyvzj7](https://maps.app.goo.gl/mWMVjjaiBg9pyvzj7))
+ - You might be interested in this very cool Cooltra offering to move around BCN: [https://norrsken-barcelona.officernd.com/benefits](https://norrsken-barcelona.officernd.com/benefits)
+- [ ] Say hi to everyone! You can introduce yourself at the slack channel #all-staff.
+
+---
+
+- Intro to Superhog
+ - Business
+ - Wilbur+Dashboard+Guest Journey (some Product Manager, let’s discuss who’s best for this)
+ - Products
+ - Technology
+ - Superhog Platform - Ben Robinson
+ - CRM - Hubspot - Alex Anderson
+ - Stripe → Pablo
+ - Mixpanel → Louise Dowds
+ - People
+ - Product
+ - Ben C. → Head of Product
+ - All the other PMs
+ - Marketing & Sales
+ - Leo & Matt → RevOps and Sales leadership
+ - Alex Anderson → Business Systems / CRM
+ - Beth → Marketing Manager
+ - Finance
+ - Suzannah → Finance Director
+ - Jamie Deeson → Finance Wizard
+ - Other
+ - Humphrey
+ - Engineering
+ - Ben Robinson → CTO
+ - Gus, Lawrence, Ray → Senior Devs
+- Intro to Data
+ - Data Platform
+ - DWH
+ - PBI
+ - dbt project
+ - Existing Data Products and Data Sources [Data Catalogue](https://www.notion.so/Data-Catalogue-78d91434aa1442cbb6cc13b73c7fb664?pvs=21)
+ - Go through our History: [Data News](https://www.notion.so/Data-News-7dc6ee1465974e17b0898b41a353b461?pvs=21)
+- Organize the team
+- Do stuff
+
+[Onboarding checklist - Joaquín](Onboarding%20checklist%20-%20Joaqui%CC%81n%200f979c6139114f96b9a37bd709edf09c.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Orchestration Engine Project a10527d3c6144b58baf202cbeb657daa.md b/notion_data_team_no_files/Orchestration Engine Project a10527d3c6144b58baf202cbeb657daa.md
new file mode 100644
index 0000000..0abfd24
--- /dev/null
+++ b/notion_data_team_no_files/Orchestration Engine Project a10527d3c6144b58baf202cbeb657daa.md
@@ -0,0 +1,5 @@
+# Orchestration Engine Project
+
+As part of Q4 planning, we decided to finally go and pick and deploy an orchestration engine for our data platform.
+
+[Choosing](Choosing%20b305d0910ef446578cc28c3b79042ea1.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Our repos & monitoring 601c169a59e4469ca38d5493a6356bc1.md b/notion_data_team_no_files/Our repos & monitoring 601c169a59e4469ca38d5493a6356bc1.md
new file mode 100644
index 0000000..6fa34e0
--- /dev/null
+++ b/notion_data_team_no_files/Our repos & monitoring 601c169a59e4469ca38d5493a6356bc1.md
@@ -0,0 +1,22 @@
+# Our repos & monitoring
+
+# Repositories
+
+An overview of the most important git repositories in the Data Team.
+
+- [dbt](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project): the monorepo where all the dbt models for our DWH live.
+- [Power BI reports](https://guardhog.visualstudio.com/Data/_git/data-pbi-reports): the monorepo where we store the files for our Power BI reports.
+- [sql snippets](https://guardhog.visualstudio.com/Data/_git/data-sql-snippets): a bit of a shared dumpster fire to hold interesting queries.
+- [infra](https://guardhog.visualstudio.com/Data/_git/data-infra-script): an explainer on what the infra looks like and how to deploy it.
+- [sh-invoicing](https://guardhog.visualstudio.com/Data/_git/data-invoicing-exporter): a little CLI to export invoicing related data for the finance team each month.
+
+Also, some interesting repos from our engineering colleagues:
+
+- [superhog monolith](https://guardhog.visualstudio.com/Superhog/_git/superhog-mono-app): the main monolith for the backend.
+ - The most interesting part is the [Data section](https://guardhog.visualstudio.com/Superhog/_git/superhog-mono-app?path=/Guardhog.Data), which contains all the migrations and DDL that shapes the [Core database](https://www.notion.so/Superhog-Core-Database-70786af3075e46d4a4e3ce303eb9ef00?pvs=21).
+
+# Monitoring
+
+An overview of the key monitoring to ensure the main Data services are running properly
+
+- [SLIs](https://portal.azure.com/#@guardhog.com/dashboard/arm/subscriptions/f022c1f3-f93a-426f-ada3-8260295cf84f/resourcegroups/dashboards/providers/microsoft.portal/dashboards/a0d5850b-669e-46f4-81f9-bdf521ff4530): Resource consumption of the DWH and VMs availability (Production environment)
\ No newline at end of file
diff --git a/notion_data_team_no_files/PBI Switch table from Import to DirectQuery 1210446ff9c98027b620f441f083a588.md b/notion_data_team_no_files/PBI Switch table from Import to DirectQuery 1210446ff9c98027b620f441f083a588.md
new file mode 100644
index 0000000..964ddf2
--- /dev/null
+++ b/notion_data_team_no_files/PBI Switch table from Import to DirectQuery 1210446ff9c98027b620f441f083a588.md
@@ -0,0 +1,38 @@
+# PBI: Switch table from Import to DirectQuery
+
+## Situation
+
+Generally, we like to have all queries pointing at the DWH from PBI reports in `DirectQuery` mode. This is preferable for a number of reasons, which I won’t cover here to keep things brief.
+
+It might happen that, for some reason, a report ends up being shipped to production with some query to the DWH set in `Import` mode. The most common cause is simply not noticing (been there, done that, no worries).
+
+## Issue
+
+This used to be a massive hassle because:
+
+- Once the query is created, PBI won’t allow you to change from one mode to the other.
+- This means you can only delete the existing query/table and recreate it.
+- Doing this means that everything that’s built on top of this query (measures, visuals, etc.) will go to hell and you’ll have to rebuild it manually. On any report that’s not trivial, this is time consuming and extremely error prone. A clear no-no.
+
+## Solution
+
+The solution is to do some good old hacking. You can change the query mode by modifying the source files of the PBI report. This is not supported by PBI and is clearly experimental, so be careful and triple-check once you’ve done it to make sure nothing blew up.
+
+Steps to achieve:
+
+1. Let’s imagine we are working on report `MySweetReport` which contains the query `MyAwfulQuery`. The query is currently in `Import` mode and we want to switch it to `DirectQuery`.
+2. You are hopefully working on a git branch already. If not, definitely create one.
+3. Make sure your report is NOT open in PBI Desktop.
+4. On your branch, make sure to commit or revert all pending changes. Basically, you want to do this from a stable report situation, so you can rollback decently if shit hits the fan.
+5. Once your git state is clean, look for the report you’re working with in the repo, and open the file `MySweetReport.dataset/model.bim`.
+6. This monster JSON has a gazillion things, so let me explain the structure a bit before we pull out our surgery knife (there’s also a trimmed sketch after these steps):
+ 1. The root element contains a key called `tables` . `tables` contains an array, with one entry for every query you have in your PBI report. Each entry has a `name`, which should match the name you can see in PBI Desktop.
+ 2. Each `table` entry will have another key inside called `partitions`, which contains an array. I’ve always found this array to only have one element. I’m not sure under which circumstances it might contain multiple elements. If that’s what you see, I can’t help you, you’re on your own, good luck 🫡.
+ 3. The only entry within `partitions` will have an entry named `mode`. This can either be `import` or `directQuery`.
+7. To switch from `import` to `directQuery`... Just replace `import` with `directQuery`.
+8. Make a commit just with this change so you can easily isolate it in git.
+9. After this, review the report and triple-check everything is working as expected.
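+
+For orientation, a trimmed, illustrative sketch of the part of `model.bim` described in step 6. Real files contain many more keys and much more content; the names and values here are made up:
+
+```json
+{
+  "tables": [
+    {
+      "name": "MyAwfulQuery",
+      "partitions": [
+        {
+          "name": "MyAwfulQuery",
+          "mode": "import"
+        }
+      ]
+    }
+  ]
+}
+```
+
+Changing `"mode": "import"` to `"mode": "directQuery"` in that single spot is the whole trick.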
+
+Done!
+
+If you encounter different situations, things don’t work as described here, you find out more hacky tricks around this… Feel free to enrich this page.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Payment Validation Set data problems 2382b2ecb24243449caac4687f044391.md b/notion_data_team_no_files/Payment Validation Set data problems 2382b2ecb24243449caac4687f044391.md
new file mode 100644
index 0000000..4d89246
--- /dev/null
+++ b/notion_data_team_no_files/Payment Validation Set data problems 2382b2ecb24243449caac4687f044391.md
@@ -0,0 +1,186 @@
+# Payment Validation Set data problems
+
+# Summary
+
+- We can’t tell what paid services or what prices were offered in any past Guest Journey.
+- We can only know what services and prices are set right now (you can find some explainers and queries below on how to achieve this).
+- This won’t change until . Even in that case, past history is gone for good and we can’t rebuild it.
+
+# Research
+
+While working on Joan’s request 19459, where we were trying to obtain guests that selected `FeewithDeposit` from certain hosts that have the option available in their listings, we discovered that we couldn’t make the join between `live.dbo.PaymentValidationSet` and `live.dbo.VerificationRequest`.
+
+Taking a closer look at these tables, we found many `NULL` values for `PaymentValidationSetId` inside `live.dbo.VerificationRequest`; the following query result shows that around 99% of the values are missing.
+
+```sql
+select PaymentValidationSetId, count(*)
+from VerificationRequest vr
+group by PaymentValidationSetId
+```
+
+
+
+Here is also the query used to obtain the `PaymentValidationSetId` with `FeewithDeposit` active
+
+```sql
+select *
+from PaymentValidationSet pvs
+left join PaymentValidationSetToCurrency pvstc on pvstc.PaymentValidationSetId = pvs.Id
+where pvstc.DisabledValidationOptions & 4 <= 0
+and pvs.IsActive = 1
+and pvstc.CurrencyIso = 'GBP'
+```
+
+## How it works and how to get the data
+
+This `PaymentValidationSetId` inside `live.dbo.VerificationRequest` was added by someone who didn’t backfill the data, and they are not sure how correct it is: [https://guardhog.visualstudio.com/Superhog/_workitems/edit/13564](https://guardhog.visualstudio.com/Superhog/_workitems/edit/13564).
+
+Currently, the way to obtain the payment options a host has set for each accommodation is to look at `PaymentValidationSetId` inside `live.dbo.Accommodation`, **which unfortunately only gives the current `PaymentValidationSetId`, so we won’t have any history**. If this value is `NULL`, the accommodation uses the default `PaymentValidationSetId` (**which can itself be modified**); otherwise, it uses a custom payment set.
+
+If the default is used, we have to check all the default versions in the `live.dbo.PaymentValidationSet` table - found via `SuperhogUserId = Booking.CreatedByUserId or SuperhogUserId is NULL` - and use the version that would have been active when the booking was created (`Booking.CreatedDate`).
+
+Using the query given by Lawrence, we can obtain the options available for each of these `PaymentValidationSetId`s, though it is still pending confirmation that it works correctly.
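+
+Pulling the above together, a rough, unvalidated sketch that lists the listing-level set (if any) and the default candidates for the host of a given booking. Deciding which default version actually applied at creation time still requires the manual reasoning shown in the worked example below:
+
+```sql
+-- Rough sketch only: when ListingLevelSetId is not NULL, that custom set applies
+-- and the default candidates can be ignored.
+DECLARE @bookingId int = 738395
+
+SELECT
+    b.BookingId,
+    b.CreatedDate            AS BookingCreatedDate,
+    a.PaymentValidationSetId AS ListingLevelSetId,
+    pvs.Id                   AS CandidateDefaultSetId,
+    pvs.IsActive,
+    pvs.UpdatedDate
+FROM Booking b
+INNER JOIN Accommodation a
+    ON a.AccommodationId = b.AccommodationId
+LEFT JOIN PaymentValidationSet pvs
+    ON pvs.SuperhogUserId = b.CreatedByUserId OR pvs.SuperhogUserId IS NULL
+WHERE b.BookingId = @bookingId
+```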
+
+**Example:**
+
+1- First for a random **BookingId=738395**
+
+```sql
+select *
+from Booking b
+where BookingId = 738395
+-- 2024-06-01                            CreatedDate
+-- 760891                                VerificationRequestId
+-- 75feb1f7-848d-4394-aa08-c8d82f34f80e  CreatedByUserId
+-- 85757                                 AccommodationId
+```
+
+We get the **AccommodationId, VerificationRequestId, CreatedByUserId (Host)** and `CreatedDate`.
+
+2- We check the **PaymentValidationSetId** inside **Accommodation**
+
+```sql
+select *
+from Accommodation a
+where AccommodationId = 85757
+```
+
+
+
+**PaymentValidationSetId** is set to NULL so it uses the default version of **PaymentValidationSet**
+
+3- We obtain the data from **PaymentValidationSet**
+
+```sql
+select *
+from PaymentValidationSet pvs
+where SuperhogUserId = '75feb1f7-848d-4394-aa08-c8d82f34f80e' or SuperhogUserId is null
+```
+
+
+
+Here we can see that there are multiple versions of the default payment set, the most recent one being **Id=2186**, which was also created before the **booking.CreatedDate** - but it is not active. So in this case the **PaymentValidationSet** used is **Id=1**.
+
+FYI, this is also a weird case because **booking.CreatedDate** is before the **PaymentValidationSet.UpdatedDate** of **Id=2186**, so it would make sense for the booking to use that one, but we cannot confirm it.
+
+4- We use Lawrence’s query to obtain Payment options for the previously obtained **PaymentValidationSet**
+
+```sql
+SELECT
+    PaymentValidationSetId,
+    CASE WHEN DisabledValidationOptions & 1  > 0 THEN 0 ELSE 1 END AS "Fee(1)",
+    CASE WHEN DisabledValidationOptions & 2  > 0 THEN 0 ELSE 1 END AS "Membership(2)",
+    CASE WHEN DisabledValidationOptions & 4  > 0 THEN 0 ELSE 1 END AS "FeeWithDeposit(4)",
+    CASE WHEN DisabledValidationOptions & 8  > 0 THEN 0 ELSE 1 END AS "Waiver(8)",
+    CASE WHEN DisabledValidationOptions & 16 > 0 THEN 0 ELSE 1 END AS "NoCover(16)"
+FROM
+    PaymentValidationSetToCurrency pvstc
+WHERE
+    CurrencyIso = 'GBP'
+    AND PaymentValidationSetId = 1
+```
+
+5- Finally we can verify that the Payment chosen by the guest coincides with the available options for this **PaymentValidationSet**
+
+```sql
+select Value
+from Verification v
+where Name = 'PaymentValidation'
+and VerificationRequestId = 760891
+```
+
+## Current explanation and work from Development Team
+
+There are 4 levels of settings when it comes to payment validation sets:
+
+- Global level
+- Account/User level
+- Listing Level
+- Host&Stay override
+
+**Global Level PaymentValidationSet**
+
+- these were set in the db when we first implemented payment
+ - in the db there is one row with SuperhogUserId == null, this is the Global PaymentValidationSet
+- where can we see them? **(nowhere - yet)**
+
+Rules
+
+- If no override has been set for an account, all guest journeys created via this account use the Global PaymentValidationSet
+
+**Account Level PaymentValidationSet**
+
+- Set in Wilbur
+- When a host account is set up and the user navigates to the Wilbur -> Verification Terms tab, they will see their account set. If no previous account set is saved, they will see the Global set by default
+- When they press save (regardless of whether any actual changes have been made), this account will now have an account override set.
+- Each time they save, the old set is expired and a new set is created so we can track history
+
+Rules
+
+- If an account level PaymentValidationSet has been set for an account (AND NO OTHER SET exists - e.g. listing level or host and stay), all guest journeys created via this account will use this Account Level PaymentValidationSet
+
+**Listing Level PaymentValidationSet**
+
+- A host can use their dashboard to create one or more PaymentValidationSets and then optionally apply these to one or more listings
+- Part 1 - setting up and editing listing level paymentvalidationsets
+ - Load host dashboard and go to account -> Deposit/Waiver Price Plans
+ - The default override (account or global level, whichever applies) will come first and will not be editable
+    - All other sets are listing level sets. You can add new ones and edit existing ones (can’t delete yet)
+- Part 2 - using the listing level paymentvalidationsets
+ - Now that you have some listing level paymentvalidationsets, you can now apply them to individual listings
+ - Go to the dashboard and go to Listings
+ - you can use the dropdown to set a price plan for each listing.
+    - By default, all listings will be assigned the default paymentvalidationset. When you choose another paymentvalidationset from the dropdown, that paymentvalidationset will be set against the listing.
+    - Dev Note: if you don’t select one, or change it back to default, the listing will simply have a NULL PaymentValidationSetId - this is how we apply the default.
+    - **Dev Note (nice to have)**: there should really be a save notification when these are applied.
+
+Rules
+
+If a booking is created against a listing with a listing-level paymentvalidationset, then the payment values used come from that listing paymentvalidationset. If the listing does not have one, then the default is used (Account level first, Global if none of the above).
+
+**Host&Stay override**
+
+With Host & Stay, let’s say:
+
+- there were 100 guest journeys already existing
+- the existing Account/User level PaymentValidationSet was, let’s say, PaymentValidationSetId = 456
+- **All existing guest journeys need to use 456, end of story - regardless of any subsequent changes**
+- **All guest journeys created after this point need to follow the normal rules**
+
+This is the work that Yaseen did as part of sprint 39. Unfortunately it didn’t work very well, as it exposed an existing bug - we were under the mistaken impression that no listing level paymentvalidationsets were applied against any Host&Stay listings (**but two did actually exist**). This exposed an error in the code whereby we retrieved all of the paymentvalidationsets for that host and simply picked the top one without applying any other filter. We need to fix this because it's wrong (!)
+
+So, three things need to be done
+
+- fix the bug that host&stay exposed
+- remove duplicate code which handles paymentvalidationsets and have a single source of truth
+- unit test and end to end test to make sure all rules are followed correctly
+
+## Upcoming Changes
+
+In future work there is a plan to add the PaymentValidationSet to VerificationRequest for an easier connection between the two, as well as to keep some historic data for each of them, which is non-existent today. There is also a plan to simplify how to get the available payment options set by the host in their PaymentValidationSet, so we don’t have to depend on a complex query with virtually no explanation of how it works.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Q1 Data Scopes proposal 1570446ff9c9800d9063d448c71aeea1.md b/notion_data_team_no_files/Q1 Data Scopes proposal 1570446ff9c9800d9063d448c71aeea1.md
new file mode 100644
index 0000000..6a2194b
--- /dev/null
+++ b/notion_data_team_no_files/Q1 Data Scopes proposal 1570446ff9c9800d9063d448c71aeea1.md
@@ -0,0 +1,56 @@
+# Q1 Data Scopes proposal
+
+With Q4’24 coming to an end, it’s time to plan our priorities and goals for Q1.
+
+This document outlines our proposal for next quarter. It’s a working document to support our TMT alignment on 2024-12-12.
+
+## Executive Summary
+
+During Q1 we want to:
+
+- Ensure continuity and accuracy in invoicing processes amidst tooling transitions and new setups, avoiding disruptions and errors.
+- Provide ongoing support for business needs through ad-hoc data requests and maintaining critical reporting functions.
+- Strengthen support for data-driven decision-making by enhancing Account Management reporting, A/B test monitoring, and KPI implementations.
+
+Due to limited capacity in Q1, including no availability for Data Engineering work, we are focusing on a smaller set of high-impact objectives. We remain open to discussing priorities and adapting the scope to ensure alignment with the company’s most pressing needs.
+
+## Main body
+
+You can find below an exhaustive list of scopes that we consider are candidates for Q1 priorities. We have bucketed them in three priority levels, with 1 being the highest and 3 being the lowest.
+
+We think it’s not realistic to expect all of them to be completed in Q1. Our estimates:
+
+- We think the priority 1 group is fully doable.
+- On top of that, we think it’s highly likely that we achieve priorities 2 as well.
+- Finally, we believe we won’t be able to complete all priority 3. It might even be that we don’t get to them at all.
+
+| Area | Item | Priority | Definition of Done | Value obtained | Comments |
+| --- | --- | --- | --- | --- | --- |
+| Finance | Support during invoicing tooling changes (Hyperline) | 1 | Data team maintains and grows revenue reporting pipelines with the new Hyperline setup. Also support preparing line items for API invoicing. | We ensure continuity and consistent KPIs reporting. We help API and Finance teams get API services invoiced. | |
+| Finance | Old invoicing tool adaptations to avoid double charging | 1 | Data team excludes clients migrated to the New Dash from the old invoicing, as these get invoiced within the new setup, to avoid double charging. | We avoid double invoicing. | |
+| Finance | Cancellation API invoicing | 2 | Automate the extraction of raw data and computing of line items, deliver to Finance team for invoice generation. | We invoice timely and accurately with no manual work. | Carry over from Q4 2024. Lack of clients reduced prioritisation on this subject. |
+| General support | Guest Products and Multi-Service Single-Payment support | 2 | Data team ensures old invoicing tool continuity after Stripe metadata changes. Data team grants support on transitioning towards the new Guest Products setup. Data team ensures continuity on Guest Revenue reporting with the new single-payment setup. | We invoice timely and accurately with no manual work. We ensure continuity and consistent KPIs reporting. | |
+| General support | Data Captain Requests | 2 | Business as usual: serve ad-hoc requests from the wider business to support everything and everyone with Data | Our people get the insights they need for their work. | Max. dedication will likely decrease to 5h/week due to capacity reduction. |
+| Data Driven Decision Making | Account Managers Reporting Improvements | 2 | Data team improves and extends the capabilities around Account Management reporting by providing new, actionable insights. This includes improvements discussed with Matt and other RevOps colleagues. | We have greater visibility on the business performance of each account to better prioritise account management and sales effort. | At the moment we’re only tracking 3 main metrics, but we could go further in terms of AM understanding based on RevOps input (Claim Payouts, Gross Margin per Account, etc). |
+| Data Driven Decision Making | Support Guest Squad on Monitoring A/B tests | 2 | Data team monitors and analyses A/B test results within the Guest Journey scope. | We move towards data-driven, factual decision making rather than gut feeling. We’re able to quantify the impact a certain initiative has in terms of business impact before deciding if we want to roll it out to all Guest Journeys. | Monitoring infrastructure built during Q4 2024. Monitoring a reasonable amount of A/B tests should be low effort on Data side. |
+| Product Support | Exchange rates integration to the backend | 2 | The different tech components in SH have the capacity to use exchange rates wherever they are needed. | Needed to support development of application features. No exchange rates, no features, we miss their value. | Only manageable while we have Pablo with us |
+| Data Driven Decision Making | New Dash KPIs | 3 | Implement KPIs for the New Dash line of business. | Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement. Provide owners with data and insights to drive improvements and initiatives in their domain. | Carry over from Q4 2024. Part of the work has already been done with dedicated reporting, but we need to move forward in terms of KPIs. KPIs definition is already in place. |
+| Data Driven Decision Making | APIs KPIs | 3 | Implement KPIs for the APIs line of business. | Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement. Provide owners with data and insights to drive improvements and initiatives in their domain. | Carry over from Q4 2024. Part of the work has already been done with dedicated reporting, but we need to move forward in terms of KPIs. KPIs definition is already in place. |
+| Data Driven Decision Making | Improve KPIs and reporting data quality | 3 | Data team improves the quality of the KPIs by ensuring metric robustness and better sources. | Better data-driven decision making by having more qualitative data. | Potential candidates as of EOY 24: ensure Hubspot deals are the source of truth; Guest Journey Completion metric; Cancelled Bookings metric |
+| Product Support | Revenue Share in Guest Journey | 3 | Data team enriches existing reporting to capture revenue shares for each Guest Product. | We have a proper understanding of the service adoption and money flow across Guest Products. | |
+| Product Support | Support monitoring and alerting | 3 | Data team grants support on business-related alerting and monitoring. | We are able to anticipate negative business impact as a result of faulty releases or misfunctions of tech components. | |
\ No newline at end of file
diff --git a/notion_data_team_no_files/Q1 Data Scopes proposal 1570446ff9c9800d9063d448c71aeea1.md:Zone.Identifier b/notion_data_team_no_files/Q1 Data Scopes proposal 1570446ff9c9800d9063d448c71aeea1.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Q1 Data Scopes proposal 1570446ff9c9800d9063d448c71aeea1.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Q3 Data Achievements 1130446ff9c9800e84e4f03750b752a1.md b/notion_data_team_no_files/Q3 Data Achievements 1130446ff9c9800e84e4f03750b752a1.md
new file mode 100644
index 0000000..8030755
--- /dev/null
+++ b/notion_data_team_no_files/Q3 Data Achievements 1130446ff9c9800e84e4f03750b752a1.md
@@ -0,0 +1,72 @@
+# Q3 Data Achievements
+
+The official Data OKRs are located here: [Data OKRs](https://www.notion.so/299e4da6e92043899646d11609c051ae?pvs=21)
+
+# **Q3 Recap: Achievements and Status Update**
+
+As we approach the end of Q3, we want to take a moment to reflect on our progress and key achievements over the past quarter. Below is a summary of the Data OKRs we set for this quarter, along with the current status and any pertinent comments.
+
+## **Enhance Finance Processes**
+
+- **Support invoicing improvements:**
+ - *Status:* Support has been granted, initiative to find a new invoicing tool is ongoing. Hyperline implementation is now planned for development in Q1 2025 in the Dash Squad.
+- **Prevent e-deposit manual invoicing:**
+    - *Status:* Accomplished. An exporter was built first to facilitate e-deposit invoicing, and a dedicated Power BI report was later created and shared with the Finance team so they can handle e-deposit invoicing autonomously.
+- **Prevent Guesty manual invoicing:**
+ - *Status:* Blocked. Progress is hindered due to unclear invoicing rules for Guesty.
+- **New automation and reporting possibilities with Xero integration in DWH:**
+    - *Status:* Delivered. Xero integration completed, with new reporting features added to ease finance work around invoicing, crediting and host resolutions. Additionally, the Xero integration has been incorporated and used for KPIs on revenue and resolutions.
+
+## **Provide Business KPIs Reporting**
+
+- **Definition of main metrics, dimensions, timelines, and main converted metrics:**
+ - *Status:* Accomplished. A total of 43 new metrics are available within 3 different categories (dimensions). The metrics include invoiced fees, guest payments, bookings, guest journeys, deal lifecycle, listing lifecycle and host resolutions. The dimensions include a global overview, as well as host segmentations based on the number of listings booked in the past 12 months and the host billing country.
+- **New holistic KPI dashboard available:**
+    - *Status:* Completed. Holistic reporting was made available early on and improved during the course of the quarter. The current status allows for a short-term visualisation approach (MTD) and long-term KPI exposition (Monthly Overview, Global Evolution over Time, Detail by Category). Lastly, an extensive Data Glossary is available, which provides further explanation of the metric and category definitions and any data quality issues to consider.
+- **Conduct a first assessment on product/area-specific metrics:**
+    - *Status:* A first assessment meeting on product-specific KPIs was conducted in September, and refinement sessions are expected at the beginning of Q4 before proceeding to implementation.
+
+## **Grant Continuous Support to Business Teams**
+
+- **Ad-hoc requests handling (capped at 10h/week):**
+ - *Status:* Support provided. 41 support requests completed from a total of 47 created over 3 months, helping more than 16 colleagues. A survey was also sent, achieving a 9.5/10 support score.
+- **New Pricing support:**
+    - *Status:* Support granted. The initiative is still ongoing at company level; the Data team's involvement has been providing support and assessing impacts as New Pricing is applied to more clients.
+- **New Dashboard support:**
+    - *Status:* Support granted. New reporting for user adoption has been created for MVP tracking. Reporting needs for future iterations have been discussed with the development team, and work is in progress on the Data side to track V2 performance as soon as it launches. Overall, the initiative is ongoing, and so is our involvement as New Dashboard phases move forward.
+- **Currency conversion automation/integration:**
+    - *Status:* Completed. A new in-house tool named Xexe has been created to retrieve currency conversion rates from Xe.com and store them in the DWH. The currency conversion data has been propagated to existing and new reporting, giving a better understanding of revenues and costs.
+- **New Screening API support:**
+ - *Status:* A new report has been created and it’s ready to track performance as soon as client adoption starts.
+- **New Resolution API support:**
+    - *Status:* A first discussion took place at the beginning of September, and part of the Resolutions team's needs have been covered by providing access to existing reporting. The first data from the New Resolution API and Resolution Center should be ready to be ingested after the official launch, and the reporting discussed is expected for Q4.
+
+## **Improve Data Foundations**
+
+- **CosmosDB - API/Resolutions data integrated into DWH:**
+    - *Status:* A new in-house ingestion tool named Anaxi has been created to read data from CosmosDB. E-deposit, Athena and Screening API data has been integrated into the DWH, and the existing Athena and E-deposit reporting now reads from the DWH. Anaxi is a key component that will unlock future Data reporting initiatives, such as Resolutions, CIH and the Cancellation API.
+- **Hubspot - CRM data integrated into DWH:**
+    - *Status:* The first ingestions of key Hubspot areas happened by late September, and their contents are currently being discussed with Business Systems. This integration will allow better quality for deal-based information, as well as unlock more reporting possibilities in the scope of Sales and Account Management.
+- **DBT docs availability:**
+    - *Status:* dbt docs are available locally for Data team members, allowing them to track model dependencies effectively. A production-hosted dbt docs site is still pending, but not critical.
+
+## Beyond Planned Objectives
+
+Additional work that has been done or needs to be done that is not in the direct scope of the objectives we set at the end of June.
+
+### **Additional Work: Completed**
+
+- **Guest Scope:**
+    - Check-in Hero reporting: many improvements to track all the nuances of Check-in Hero, to the point that it's one of the most complete reports available. Beyond the detailed CIH tracking, it's helping us understand the reporting possibilities in other scopes.
+ - Guest Satisfaction reporting: a new report is available containing the CSAT score tracking of our Guests over time.
+ - Guest Journey tracking: review with tech team on improvements to further increase analytical understanding. This includes support on the data quality of key events, such as Guest Journey Completion Date, as well as first discussions on guest allocation and tracking for A/B test purposes.
+- **Tax inclusiveness on Guest Payments:** a piece of work to ensure consistency of monetary and revenue figures across the different sources of data, especially against Finance figures. At this stage, all guest payments reporting excludes taxes - unless stated otherwise - in the different Power BI reports. There are still some discrepancies in the host-takes-waiver format that are being discussed with the Finance team.
+- **KPIs at client level:** deal-based extension of the KPIs to track the performance of our clients. Integrated within the Main KPIs Power BI report, it contains most of the main KPIs, as well as information on the client lifecycle. Additionally, a month-by-month evolution and a deal comparison have been made available.
+- **Minimum listing fee invoicing:** the invoicing exporter has been improved to include the capability of invoicing a minimum listing fee, thus increasing the company revenue.
+- **Data quality improvements:** several data quality assessments have been conducted and automatic data tests have been implemented, going one step further in increasing data quality.
+
+### **Additional Work: Pending**
+
+- **Cancellation API reporting/invoicing support.**
+- **Check-in Hero API reporting/invoicing support.**
+- **E-deposit Cosmos DB migration**: a technical migration of the Cosmos DB that will effectively split Athena vs. E-deposit records will happen in the following weeks. In the meantime, the migration and the necessary adaptation of models on the Data side are planned to avoid downtime.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Q3 Data Achievements 1130446ff9c9800e84e4f03750b752a1.md:Zone.Identifier b/notion_data_team_no_files/Q3 Data Achievements 1130446ff9c9800e84e4f03750b752a1.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Q3 Data Achievements 1130446ff9c9800e84e4f03750b752a1.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Q3 OKRs drafting 33c62b60320849acbb01925a01f7a383.md b/notion_data_team_no_files/Q3 OKRs drafting 33c62b60320849acbb01925a01f7a383.md
new file mode 100644
index 0000000..38ba317
--- /dev/null
+++ b/notion_data_team_no_files/Q3 OKRs drafting 33c62b60320849acbb01925a01f7a383.md
@@ -0,0 +1,118 @@
+# Q3 OKRs drafting
+
+This page aims to provide a work in progress place to draft OKRs and achievements.
+
+The official Data OKRs are located here: [Data OKRs](https://www.notion.so/299e4da6e92043899646d11609c051ae?pvs=21)
+
+| Scope | Objective | Status/comment | Actions |
+| --- | --- | --- | --- |
+| Enhance Finance Processes | Support invoicing improvements | Support granted, initiative is still ongoing | |
+| Enhance Finance Processes | Prevent e-deposit manual invoicing | Accomplished, we’ve built an exporter to provide e-deposit invoicing | |
+| Enhance Finance Processes | Prevent Guesty manual invoicing | Blocked - unclear invoicing rules that apply to Guesty | |
+| Enhance Finance Processes | New automation and reporting possibilities with Xero integration in DWH | Xero integration delivered, as well as new reporting to ease finance work + inclusion of this data for KPIs | |
+| Provide Business KPIs Reporting | Definition of main metrics, dimensions, timelines and main converted metrics | Completed | |
+| Provide Business KPIs Reporting | New holistic KPI dashboard available | Completed | |
+| Provide Business KPIs Reporting | Conduct a first assessment on product/area specific metrics | To be handled during September | ~~Uri to handle a couple of 30 min sessions with:
+- Old dash/New dash
+- Guests
+- APIs
+- Resolutions~~ |
+| Grant Continuous Support to Business Teams | Ad-hoc requests handling (capped 10h/week)
+
+ | Support has been granted - 41 support requests completed over a total of 47 over 3 months. | - Uri to check out microsoft forms
+- Retrieve a couple of nice examples :D |
+| Grant Continuous Support to Business Teams | New Pricing support | Support has been granted | |
+| Grant Continuous Support to Business Teams | New Dashboard support | Support has been granted. Reporting available. | |
+| Grant Continuous Support to Business Teams | Currency conversion automation/integration | Completed | |
+| Grant Continuous Support to Business Teams | New Screening API support | Reporting ready | |
+| Grant Continuous Support to Business Teams | New Resolution API support | Pending, no information yet. We have scheduled a meeting to discuss it | ~~- To check with Lou A~~
+Meeting scheduled Monday 2nd September |
+| Improve Data Foundations | CosmosDB - API/Resolutions data integrated into DWH | New ingestion tool created. Critical data for reporting/invoicing available within DWH | |
+| Improve Data Foundations | Hubspot - CRM data integrated into DWH | Expected September | |
+| Improve Data Foundations | DBT docs availability | Expected September | |
+
+Additional work that has been done:
+
+- Guest scope
+ - Check-in Hero reporting
+ - Guest Satisfaction reporting
+ - Guest Journey tracking
+- Tax inclusiveness on Guest Payments
+- KPIs at client level (deal-based)
+- Minimum listing fee invoicing
+- Several data quality assessments & automatic data tests
+
+Additional work that has to be done:
+
+- Cancellation API reporting/invoicing support
+- Check-in Hero API reporting/invoicing support
+
+WIP:
+
+# **Q3 Recap: Achievements and Status Update**
+
+As we approach the end of Q3, we want to take a moment to reflect on our progress and key achievements over the past quarter. Below is a summary of the Data OKRs we set for this quarter, along with the current status and any pertinent comments.
+
+## **Enhance Finance Processes**
+
+- **Support invoicing improvements:**
+ - *Status:* Support granted, initiative to find a new invoicing tool is ongoing.
+- **Prevent e-deposit manual invoicing:**
+ - *Status:* Accomplished. An exporter has been built to facilitate e-deposit invoicing.
+- **Prevent Guesty manual invoicing:**
+ - *Status:* Blocked. Progress is hindered due to unclear invoicing rules for Guesty.
+- **New automation and reporting possibilities with Xero integration in DWH:**
+ - *Status:* Delivered. Xero integration completed, with new reporting features added to ease finance work and incorporate data for KPIs.
+
+## **Provide Business KPIs Reporting**
+
+- **Definition of main metrics, dimensions, timelines, and main converted metrics:**
+ - *Status:* Accomplished. A total of 43 new metrics are available within 3 different dimensions.
+- **New holistic KPI dashboard available:**
+ - *Status:* Completed. Holistic reporting with a short-term and long-term KPI exposition and an extensive Data Glossary.
+- **Conduct a first assessment on product/area-specific metrics:**
+ - *Status:* Product-specific KPIs meeting scheduled for September.
+
+## **Grant Continuous Support to Business Teams**
+
+- **Ad-hoc requests handling (capped at 10h/week):**
+ - *Status:* Support provided. 41 support requests completed from a total of 47 created over 3 months, helping more than 16 colleagues.
+- **New Pricing support:**
+    - *Status:* Support granted. The initiative is ongoing, and so is our involvement as New Pricing is applied to more clients.
+- **New Dashboard support:**
+    - *Status:* Support granted. New reporting for user adoption is available. The initiative is ongoing, and so is our involvement as New Dashboard phases move forward.
+- **Currency conversion automation/integration:**
+ - *Status:* Completed. A new tool named Xexe has been created to retrieve and store the currency conversion into DWH, propagating it to existing and new reporting.
+- **New Screening API support:**
+ - *Status:* Reporting is ready to track performance.
+- **New Resolution API support:**
+ - *Status:* First data ready to be ingested and minimal reporting discussed and expected to be ready by EOQ.
+
+## **Improve Data Foundations**
+
+- **CosmosDB - API/Resolutions data integrated into DWH:**
+ - *Status:* New ingestion tool named Anaxi has been created. Critical data for reporting/invoicing is now available in DWH, unlocking future Data initiatives.
+- **Hubspot - CRM data integrated into DWH:**
+ - *Status:* Expected completion in September.
+- **DBT docs availability:**
+ - *Status:* Expected completion in September.
+
+## Beyond Planned Objectives
+
+Additional work that has been done or needs to be done that is not in the direct scope of the objectives we set at the end of June.
+
+### **Additional Work: Completed**
+
+- **Guest Scope:**
+ - Check-in Hero reporting: many improvements to track all nuances of Check-in Hero.
+ - Guest Satisfaction reporting: CSAT score tracking of our Guests
+ - Guest Journey tracking: review with tech team on improvements to further increase analytical understanding.
+- **Tax inclusiveness on Guest Payments:** ensuring consistency of monetary and revenue figures across the different sources of data.
+- **KPIs at client level:** deal-based extension of the KPIs to track the performance of our clients.
+- **Minimum listing fee invoicing:** modifying the invoicing exporter to include the capability of invoicing a minimum listing fee, thus increasing revenue.
+- **Data quality improvements:** several data quality assessments have been conducted & automatic data tests have been implemented, going one step further on increasing data quality
+
+### **Additional Work: Pending**
+
+- **Cancellation API reporting/invoicing support.**
+- **Check-in Hero API reporting/invoicing support.**
\ No newline at end of file
diff --git a/notion_data_team_no_files/Q3 OKRs drafting 33c62b60320849acbb01925a01f7a383.md:Zone.Identifier b/notion_data_team_no_files/Q3 OKRs drafting 33c62b60320849acbb01925a01f7a383.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Q3 OKRs drafting 33c62b60320849acbb01925a01f7a383.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Q4 Data Achievements 1570446ff9c980b0a094ccfc9533bee4.md b/notion_data_team_no_files/Q4 Data Achievements 1570446ff9c980b0a094ccfc9533bee4.md
new file mode 100644
index 0000000..5ba56ec
--- /dev/null
+++ b/notion_data_team_no_files/Q4 Data Achievements 1570446ff9c980b0a094ccfc9533bee4.md
@@ -0,0 +1,170 @@
+# Q4 Data Achievements
+
+The official Data OKRs are located here: [Data OKRs](https://www.notion.so/299e4da6e92043899646d11609c051ae?pvs=21)
+
+# **Q4 Recap: Achievements and Status Update**
+
+As we approach the end of Q4, we want to take a moment to reflect on our progress and key achievements over the past quarter. Below is a summary of the Data OKRs we set for this quarter, along with the current status and any pertinent comments.
+
+## Support Finance Team With Invoicing Initiatives
+
+- **P1: Support during accounting/invoicing tooling changes**
+ - Status: **Achieved**
+ - Comments: Pablo has been supporting the invoicing tooling changes to Hyperline as a representative of the Data team.
+- **P1: Cancellation API invoicing support**
+ - Status: **Not Started**
+    - Comments: The lack of clients in the Cancellation API has encouraged the Data Team to pragmatically dedicate effort to other areas with greater impact.
+- **P1: Check-in Hero API invoicing support**
+ - Status: **Achieved**
+    - Comments: With the first client expected to appear by the beginning of December, we've created a first report, available in the API Reports Power BI application, that contains an overview and the details of the Check-in Hero records.
+- **P1: Screening API invoicing support**
+ - Status: **In Progress**
+    - Comments: The lack of clients in the Screening API has encouraged the Data Team to pragmatically dedicate effort to other areas with greater impact. By EOY, the Data Team has started working on a report for S&P to be ready for the next invoicing cycle.
+- **P1: New Dash invoicing support**
+ - Status: **In Progress**
+    - Comments: Even though the Data Team is not expected to invoice users in the New Dash, the fact that some of these come from the Old Dash requires modifications to the current Invoicing Exporter tool to avoid double-charging clients. Final requirements were not in place before EOY, but it's likely New Dash users will be invoiced by January.
+- **P1: Guesty invoicing support**
+ - Status: **Delayed for future Qs**
+    - Comments: As in Q3, the lack of a stable set-up to invoice Guesty has blocked our capacity to automate its invoicing.
+
+## **Provide Business KPIs Reporting**
+
+- **P2: New Dash KPIs and Reporting**
+ - Status: **Partially Achieved**
+    - Comments: Further improvements to the New Dash reporting have been made iteratively as new releases have been deployed. KPI needs for New Dash are gathered and will start to be implemented once the main features of New Dash have been fully delivered, in order to prioritise the main New Dash reporting first. Carried over to Q1 25.
+- **P2: APIs KPIs and Reporting (Cancellation, Check-in Hero)**
+ - Status: **Partially Achieved**
+    - Comments: A new report for Screen and Protect has been created. Additionally, we expect Check-in Hero reporting to be available in the coming days. The lack of Cancellation API usage has led us to prioritise other subjects. KPI needs for APIs are gathered and will start to be implemented in the coming weeks. Carried over to Q1 25.
+- **P2: Resolutions KPIs and Reporting**
+ - Status: **Delayed for future Qs**
+    - Comments: The lack of ownership of the Resolutions product has led to uncertainty on this subject, blocking us from moving forward with the integration of Resolutions data and further reporting. However, the requirements for the KPIs are gathered, and we should be able to re-launch the initiative once we have more clarity on this subject.
+- **P2: Guest Journey KPIs and Reporting**
+ - Status: **Achieved**
+    - Comments: A brand new KPIs report to track the Guest Journey has been created, allowing for much more dedicated detail on this product line.
+
+## **Support Product Team With New Products & Data-Driven Features**
+
+- **P1: Run First A/B test to drive Guest Journey experience decisions**
+ - Status: **Achieved**
+    - Comments: In a major collaboration with the Guest Squad, we managed to implement and validate a proper set-up for A/B testing. An initial A/A test has already been conducted to ensure the technical implementation is correct from both the Engineering and Data perspectives. The A/B test was released at the beginning of December and aims to finish by mid January.
+- **P2: Support New Pricing transition**
+ - Status: **Achieved**
+    - Comments: Support granted to the New Pricing transition, with a great amount of modelling happening within the DWH to adjust to the new backend structure. Further support has also come through Data Requests and additional ad-hoc analyses of client pricing before and after an expected migration.
+- **P2: Support New Dashboard transition**
+ - Status: **Achieved**
+    - Comments: Support granted to the New Dash transition, collaborating with the Dash squad to provide reporting on the New Dash. While there are some improvements that could be made in the reporting areas, the most important ones are tracked and we continuously iterate over them.
+- **P2: Provide support for data-driven features in apps**
+ - Status: **Not Started**
+    - Comments: Lack of ownership of the initiatives for some of the interested products has resulted in no action being taken by the Data Team towards this initiative. Even though we acknowledge the value it can provide to our clients, we expect further refinement to clarify the scope and viability of this project.
+
+## **Improve Data Capabilities**
+
+- **P1: Hire a Data Engineer (pending validation)**
+ - Status: **Delayed for future Qs**
+    - Comments: No final go on hiring within 2024 Q4. This puts pressure on the Data Team, especially in Q1, given the lack of Data Engineering capacity, and will reduce overall capabilities. We expect to re-launch the initiative in the following months.
+- **P1: Software upgrades (dbt, airbyte)**
+ - Status: **Delayed for future Qs**
+    - Comments: There is no blocking point to moving forward on these subjects; the upgrades are still pending.
+- **P2: Orchestration engine deployment**
+ - Status: **Delayed for future Qs**
+    - Comments: A first assessment has been conducted with Dagster. However, due to the lack of Data Engineering capacity in the following months, we decided against drastic infrastructure changes to avoid potential downtime. Likely to be resumed in 2025 Q2.
+- **P3: Perform assessment of visualization tools (PBI alternatives)**
+ - Status: **Delayed for future Qs**
+    - Comments: The decision on the Orchestration Engine will likely affect the Visualisation Tool assessment. We decided to pragmatically focus on higher-priority subjects due to the lack of capacity to launch this initiative. Likely to be resumed in 2025 Q2.
+
+## Extend Data Accessibility Capabilities
+
+- **P3: Start Domain Analysts program**
+ - Status: **Achieved**
+    - Comments: Jamie D. and Alex A. have been successfully trained in SQL and given access to the DWH. The training plan proceeds accordingly.
+- **P3: Power BI in-house training**
+ - Status: **Achieved**
+ - Comments: An in-house training session has been conducted before EOY. Additionally, reference documentation has been created.
+
+## Grant Continuous Support To Business Teams
+
+- **P1: Ad-hoc requests handling (Data Captain Requests, capped 10h/week)**
+ - Status: **Achieved**
+ - Comments: A total of 39 data support tickets have been handled in the period of Q4 2024.
+- **P2: AM reporting with churn detection capabilities**
+ - Status: **Achieved**
+    - Comments: A new Account Managers report has been created with client-aggregated data and a monthly categorisation of how each account is performing; it is widely used by the RevOps team. Additionally, 3 new Churn Rate metrics have been made available in Main KPIs.
+
+# Beyond Planned Objectives
+
+Additional work that has been done or needs to be done that is not in the direct scope of the objectives we set at the end of September. The following list aims to exemplify the kind of unplanned work that we are handling in our day-to-day, grouped by domain:
+
+## Incidents
+
+- **Invoicing incident November 2024**
+
+    We faced a major invoicing incident in the November export. At this stage it has been resolved, with December exports running fine, but the criticality of the subject required further work on the Invoicing Exporter tool that the Data Team manages. All details on the incident can be found here:
+
+ [20241104-01 - Booking invoicing incident due to bulk UpdatedDate change](20241104-01%20-%20Booking%20invoicing%20incident%20due%20to%20bu%2082f0fde01b83440e8b2d2bd6839d7c77.md)
+
+- **Check-in Hero price duplication incident November 2024**
+
+    A second incident affecting Check-in Hero reporting required some unplanned effort to fix, with a root cause very similar to a previously experienced incident. You can check the details here:
+
+ [20241119-01 - CheckIn Cover multi-price problem (again)](20241119-01%20-%20CheckIn%20Cover%20multi-price%20problem%20(a%201430446ff9c98088b547dfb0baff6024.md)
+
+
+## TMT
+
+- **Analyse claims patterns for Guesty to support negotiation**
+
+    During October, the Data team provided an in-depth analysis of the claim payouts for Athena (Guesty), showing that these were quite similar to the revenue and concluding that we were not making money from Guesty. Most of the payouts came from 5 partners that accounted for a far smaller volume of protected bookings.
+
+- **Estimated MRR for new onboarded deals**
+
+    In order to support investors' needs for insights, we helped estimate the revenue a new client is likely to generate per month on average, depending on the number of listings they manage. After a first extraction and a discussion about the accuracy of the figures obtained, a second, deeper analysis was conducted to try several approaches and settle the definition of a new metric that will need to be shown in Main KPIs. The details of the analysis are available here:
+
+ [Onboarding MRR Definition](https://www.notion.so/Onboarding-MRR-Definition-f1bada4ea5b942568d5c6b2c7917fc5c?pvs=21)
+
+- **ICE score support to prioritise initiatives**
+
+    In order to help prioritise initiatives more accurately, the Data team invested some time around December helping Product teams settle a good scoring system for Impact and Confidence, to be ready for use by the beginning of 2025.
+
+
+## Dashboard
+
+- **Data inconsistency issues on New Dash tables**
+
+    We've faced many data quality issues since the launch of New Dash, and their resolution has not always been prioritised, leading to data alerts being raised for many consecutive days, as in the case of the BookingToProductBundle issues. Lately, we've encountered some additional inconsistencies within BookingViewToService and further logic, which effectively reduces our capacity to provide accurate reporting in a timely manner.
+
+- **Removal of already existing backend fields**
+
+    We've had some data removals on existing tables, such as Accommodation and SuperhogUser. In order to avoid breaking any report or DWH logic, we needed to remove the dependencies and any existing modelling on our side.
+
+- **Adaptations on New Dash user tracking through Claims**
+
+    During the New Dash reporting initiative, there have been several evolutions in how to track which user is in New Dash and since when. Because the report has existed since a few weeks after the MVP was released, we needed to iteratively modify the user-tracking logic many times to ensure there was no report downtime.
+
+
+## Guest Journey
+
+- **Address Validation Removal**
+
+    As part of the initiative from the Guest Squad to remove Address Validation, we had to adapt the Check-in Hero Power BI reporting, as well as remove any modelling and dependencies on Address Validation within the DWH.
+
+- **Planning for Guest Products & Multi-service single payment**
+
+    The new Q4 initiative on Guest Products will require adaptations to the DWH modelling once available. Additionally, this initiative raises the point of allowing one payment for many services, which is currently not the case. Even though this has not been released yet on the Guest Squad side, we've conducted several sessions with the Guest Squad to align on the architecture of the final solution.
+
+
+## APIs
+
+- **Athena Cosmos DB migration (Guesty)**
+
+    Just as we did for E-deposit, we needed to modify our Cosmos DB ingestion tool to point towards the new Cosmos container that contains the Guesty verifications. This was done at the beginning of December with no further impact on reporting.
+
+
+## Engineering
+
+- **Currency rates architecture design**
+
+    As part of a new need from Engineering to access currency exchange rates, we had some discussions on possible designs to share currency rates among systems while keeping the same price plan with Xe.com. We are expected to provide this information from the DWH to the Superhog backend for further use, with development planned for the beginning of Q1 2025.
+
+- **Networking issues to access MS SQL**
+
+    By the end of Q4, we faced some Data-team-wide issues accessing MS SQL, namely the backend. While this is not absolutely critical for our systems, it makes it impossible for the Data team to read directly from the backend, reducing efficiency in initiatives such as New Dash. Resolving the issue requires in-depth Data Engineering skills, so it has been prioritised above other lines of work to remove the blocker.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Q4 Data Achievements 1570446ff9c980b0a094ccfc9533bee4.md:Zone.Identifier b/notion_data_team_no_files/Q4 Data Achievements 1570446ff9c980b0a094ccfc9533bee4.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Q4 Data Achievements 1570446ff9c980b0a094ccfc9533bee4.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Q4 Data Scopes proposal 75bf38ab8092471d910840ab86b0ec60.md b/notion_data_team_no_files/Q4 Data Scopes proposal 75bf38ab8092471d910840ab86b0ec60.md
new file mode 100644
index 0000000..6276cde
--- /dev/null
+++ b/notion_data_team_no_files/Q4 Data Scopes proposal 75bf38ab8092471d910840ab86b0ec60.md
@@ -0,0 +1,81 @@
+# Q4 Data Scopes proposal
+
+With Q3’24 coming to an end, it’s time to plan our priorities and goals for Q4.
+
+This document outlines our proposal for next quarter. It’s a working document to support our TMT alignment on 2024-09-12.
+
+**2024-09-12 Update:** the proposal was discussed with the TMT, and the document below is the plan for next quarter.
+
+## Executive Summary
+
+During Q4 we want to:
+
+- Expand our KPIs and data-driven decision making to new areas of business: New Dashboard, Cancellation API, CIH API, Screening API, Resolutions
+- Keep supporting Finance to keep processes accurate and efficient, in areas like the invoicing of new business areas and the adoption of new finance tools.
+- Kickstart our capabilities around data-driven product design through a first A/B testing project in our Guest Journey.
+- Keep Data team well oiled and productive by adding an additional data engineer to the team, maintaining and expanding our Data Platform, and onboarding colleagues from other teams into our DWH as Domain Analysts.
+
+## Main body
+
+You can find below all the scopes we want to set for ourselves as goals for the next quarter. Our ambition is to complete all of them, but we have still bucketed them in three priority levels, with 1 being the highest and 3 being the lowest.
+
+| Area | Item | Priority | Definition of Done | Value obtained | Comments |
+| --- | --- | --- | --- | --- | --- |
+| **Data Capabilities** | **Hire a Data Engineer** | 1 | Hire and successfully onboard a mid-senior Data Engineer. | - Remove Pablo as a single point of failure.
+- Prevent freeze on early 2025.
+- Ensure sufficient capacity to keep delivering long-term. | The Go on this hire is subject to Humphrey’s approval, currently on hold. |
+| **Data Capabilities** | **Software upgrades:
+- dbt
+- airbyte** | 1 | Upgrade our versions for `dbt` and `airbyte` . | Keeping the lights on in our data infrastructure, AKA, reports work each morning. | Boring but necessary. |
+| **General support** | **Data Captain Requests** | 1 | Business as usual: serve ad-hoc requests from the wider business to support everything and everyone with Data. | Our people get the insights they need for their work. | Ongoing survey to gather feedback on the current process |
+| **Finance** | **Support during accounting/invoicing tooling changes (Maxio/Hyperline/Xero, etc).** | 1 | Data team provides guidance on vendor selection.
+Data team maintains and grows invoicing pipelines around any new tooling as required. | We can invoice properly + have visibility and reporting on your invoices (what, how much, when, to whom). | Ambiguous since we are still selecting a vendor. Lots will depend on vendor selection and implementation. |
+| **Finance** | **Cancellation API Invoicing Support** | 1 | Automate the extraction of raw data and computing of Line items, deliver to Finance team for invoice generation. | We invoice timely and accurately with no manual work. | Best delivery method will depend on Maxio/Hyperline/Xero line of work. |
+| **Finance** | **CIH API Invoicing Support** | 1 | Automate the extraction of raw data and computing of Line items, deliver to Finance team for invoice generation. | We invoice timely and accurately with no manual work. | Best delivery method will depend on Maxio/Hyperline/Xero line of work. |
+| **Finance** | **Screening API Invoicing Support** | 1 | Automate the extraction of raw data and computing of Line items, deliver to Finance team for invoice generation. | We invoice timely and accurately with no manual work. | Best delivery method will depend on Maxio/Hyperline/Xero line of work. |
+| **Finance** | **New Dash Invoicing Support** | 1 | Automate the extraction of raw data and computing of Line items, deliver to Finance team for invoice generation. | We invoice timely and accurately with no manual work. | Best delivery method will depend on Maxio/Hyperline/Xero line of work. |
+| **Finance** | **Guesty Invoicing Support** | 1 | Automate the extraction of raw data and computing of Line items, deliver to Finance team for invoice generation. | We invoice timely and accurately with no manual work. | This item depends completely on the invoicing agreements with Guesty becoming stable, which didn’t happen in Q3.
+Best delivery method will depend on Maxio/Hyperline/Xero line of work. |
+| **Data Driven Account Management** | **Develop churn tracking and prevention reports** | 1 | Develop reporting capabilities to help Account Managers spot and address deals that are in high risk of churning. | - Reduce churn.
+- Make Account Management team more independent and efficient. | |
+| **Product** | **Run first A/B tests to drive Guest Journey experience decisions** | 1 | Deliver at least one complete experiment to drive improvements in the Guest Journey business performance (conversion, revenue, guest satisfaction, etc). | - Direct value on Guest Journey line of business.
+- First step to mature a wider A/B testing capability within Superhog, which can deliver value regularly in New Dash and Guest Journey. | |
+| **Data Capabilities** | **Orchestration engine deployment** | 2 | Select and deploy an orchestration engine in our data infrastructure. | - Enables moving data faster/from more places/with fewer errors/better monitoring.
+- Reduces human effort on data pipelines management, which means we will be able to have more of them given our limited resources. | It’s a highly technical topic, yet it’s important. Not having it risks our delivery freezing in early 2025. |
+| **General support** | **Support New Pricing** | 2 | Continue providing support in the new pricing design and implementation | Pricing decisions and actions are supported by Data. | Rather open-ended, but we expect to have a part to play on this and want to allocate capacity. |
+| **General support** | **Support New Dashboard transition** | 2 | Continue providing support in the transition of users from old to new dash, as well as in sunsetting old dash areas like old invoicing. | Helps prevent revenue- and client-threatening mistakes and errors during the transition. | Rather open-ended, but we expect to have a part to play on this and want to allocate capacity. |
+| **Product** | **Provide support for data-driven features in apps** | 2 | Support PMs and engineering squads in delivering data-intensive features within customer facing products. | Features should improve business performance by improving retention, upselling, sales, etc. | We envision this item being led by PMs, with Data team supporting. |
+| **Product** | **New Dash KPIs and Reporting** | 2 | Thoughtfully define and implement KPIs for the New Dash line of business. | - Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement.
+- Provide owners with data and insights to drive improvements and initiatives in their domain. | Pace and delivery are highly dependent on the rollout plans of the services themselves. |
+| **Product** | **Cancellation API KPIs and Reporting** | 2 | Thoughtfully define and implement KPIs for the Cancellation API line of business. | - Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement.
+- Provide owners with data and insights to drive improvements and initiatives in their domain. | Pace and delivery are highly dependent on the rollout plans of the services themselves. |
+| **Product** | **CIH API KPIs and Reporting** | 2 | Thoughtfully define and implement KPIs for the CIH API line of business. | - Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement.
+- Provide owners with data and insights to drive improvements and initiatives in their domain. | Pace and delivery are highly dependent on the rollout plans of the services themselves. |
+| **Product** | **Guest Journey and Services KPIs and Reporting** | 2 | Thoughtfully define and implement KPIs for the Guest Journey line of business. | - Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement.
+- Provide owners with data and insights to drive improvements and initiatives in their domain. | |
+| **Product** | **Resolutions KPIs and Reporting** | 2 | Thoughtfully define and implement KPIs for the Resolutions line of business. | - Allow owners and the wider business to have a common view on how to measure success in the area, and automate the measurement.
+- Provide owners with data and insights to drive improvements and initiatives in their domain. | Pace and delivery are highly dependent on the rollout plans of the services themselves. |
+| **Product** | **athena/e-deposit reporting V2** | 3 | Review existing reporting in Athena/e-deposit and perform upgrades | - Improve visibility for owners in the area and overcome existing shortcomings. | |
+| **Data Capabilities** | **Start Domain Analysts (Alex A. and Jamie D.) programme** | 3 | Provide Alex A. and Jamie D. with skills and access to the DWH to leverage it and improve as domain go-to people | - Decentralize the Data function to gain speed and knowledge and reduce bottlenecks and red tape
+- Nurture the great talent we have in-house | Matt and Suzannah buy-in. |
+| **Data Capabilities** | **Power BI in-house training** | 3 | Deliver regular PBI usage training to report consumers. | - Improve data-driven attitudes and usage of our tools
+- Spread knowledge about available Data products within Superhog | |
+| **Data Capabilities** | **Perform assessment of visualization tools (alternatives to PBI)** | 3 | Research and assess available visualization tools in the market, compare them to PBI, judge and decide whether it makes sense to switch | We think alternative tools in the market could:
+- Improve efficiency and speed in delivering reports and dashboards
+- Scale better with the size of the business, opening up report building to more profiles in our company
+- Provide features lacking in PBI | This has been on our radar since the Data team's day one, but was parked due to priorities. We feel now is a good time to give it a shot. |
+
+## Backups
+
+Our goals for Q4 are ambitious, and it won't be easy to deliver everything, especially given the large batch of new services being launched.
+
+But, if somehow we manage to chew through all of it, these are the topics that we would propose chasing next.
+
+| Area | Item | Definition of Done | Value obtained | Comments |
+| --- | --- | --- | --- | --- |
+| Data Capabilities | Open up `dbt test`-ing to data source owners | Enable other engineering squads to rely on our Data Testing capabilities in the DWH to monitor metrics and set up automated alerts on data quality and unusual values for operational metrics. | Squads can leverage the Data Platform to monitor incidents automatically, reducing our time to respond to technical and business incidents. | |
+| Data Capabilities | Mixpanel integration into DWH | Integrate Mixpanel so that all the events that we register through it are accessible in our DWH. | Ability to produce reports that mix info from Mixpanel and other sources, like our own application. Would enable insights around Guest Journey and New Dashboard usage. | |
+| Data Capabilities | Automated services monitoring | Integrate all the services in the Data Platform with a monitoring tool like Uptime Kuma. | Respond faster to incidents in our platform and automate the delivery of warnings. Track the uptime of our services to measure how well we are doing. | |
+| Data Capabilities | Excel training | Deliver, either internally or outsourcing, Excel training for the wider business. | Improve the skills of our people to perform their own analysis independently, or better leverage data shared by the Data team. | |
+| RevOps | Automated reporting on soon-to-churn/large downsizes of customers to drive preventive action | Deliver reporting to automatically track significant churning trends in our customer base and weighting them by value at risk. | Speed in responding and preventing full-blown churns. Efficient use of our AM resources by prioritising the most valuable accounts. | |
+| RevOps | Run LTV by customer segments to drive acquisition/churn prevention efforts | Automate the measurement of Lifetime Value (LTV) to drive prioritization of CRM actions towards customers: marketing expenses, churn-prevention efforts, upselling campaigns, etc. | Improve the efficiency of our AM and Marketing resources by targeting the most profitable/costly parts of our customer base. | |
\ No newline at end of file
diff --git a/notion_data_team_no_files/Q4 Data Scopes proposal 75bf38ab8092471d910840ab86b0ec60.md:Zone.Identifier b/notion_data_team_no_files/Q4 Data Scopes proposal 75bf38ab8092471d910840ab86b0ec60.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Q4 Data Scopes proposal 75bf38ab8092471d910840ab86b0ec60.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Quarterlies 959c63c9ec3641ab928840488f852b8e.md b/notion_data_team_no_files/Quarterlies 959c63c9ec3641ab928840488f852b8e.md
new file mode 100644
index 0000000..975d4ee
--- /dev/null
+++ b/notion_data_team_no_files/Quarterlies 959c63c9ec3641ab928840488f852b8e.md
@@ -0,0 +1,9 @@
+# Quarterlies
+
+[2024Q3](2024Q3%20ff7f97af85744bb4bf9a1c1f679ac50a.md)
+
+[2024Q4](2024Q4%206420ae68694f4f86ab69bdce3b2dfa24.md)
+
+[2025Q1](2025Q1%201570446ff9c980dea9cbf31bb603e09e.md)
+
+[2025Q3](2025Q3%202100446ff9c980c8b55ae57d39836c07.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Quarterlies 959c63c9ec3641ab928840488f852b8e.md:Zone.Identifier b/notion_data_team_no_files/Quarterlies 959c63c9ec3641ab928840488f852b8e.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Quarterlies 959c63c9ec3641ab928840488f852b8e.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Refactoring Business KPIs - 2024-07-05 5deb6aadddb34884ae90339402ac16e3.md b/notion_data_team_no_files/Refactoring Business KPIs - 2024-07-05 5deb6aadddb34884ae90339402ac16e3.md
new file mode 100644
index 0000000..cc40361
--- /dev/null
+++ b/notion_data_team_no_files/Refactoring Business KPIs - 2024-07-05 5deb6aadddb34884ae90339402ac16e3.md
@@ -0,0 +1,149 @@
+# Refactoring Business KPIs - 2024-07-05
+
+**2024-07-05**
+
+# Current status
+
+In just over a month, the number of KPIs that we provide as a Data Team has grown considerably. This goes together with an increase in the dbt modelling that supports the final display in the Power BI report.
+
+The current approach is the following:
+
+- Each metric defined in [Reporting Needs](https://www.notion.so/Reporting-Needs-afaf4d5384764023a246d6cf7de201b4?pvs=21) gets grouped into a model with the same typology. For example, `Created Bookings`, `Cancelled Bookings` and `Checkout Bookings` are metrics associated with `Bookings`. There are some more advanced weighted measures, such as `Guest Journey Start Rate`, which in this case is included in the `Guest Journey` model.
+- For each eligible metric, we currently have **2 views**:
+    - A **global** view, meaning it is not granularised. These models carry the `mtd` prefix. This view is presented in 2 formats:
+        - For historical months, we retrieve the metric on a monthly basis. This is effectively the same as taking the end of the month on a month-to-date basis.
+        - For the current month, we retrieve the information using a month-to-date approach. This means that if today is the 4th of July 2024, we get the information from the 1st of July 2024 to the 3rd of July 2024 and compare it with the same period of the previous year, i.e. the 1st of July 2023 to the 3rd of July 2023. During the current month, we are able to retrieve this comparison for every day that has already elapsed: apart from the comparison for the 3rd of July 2024, we will also have the 1st and the 2nd. This gets cleaned up every month. (A sketch of the `int_dates_mtd` date spine that drives this is included after this list.)
+ - Structure:
+ - The **global** structure is the following:
+
+    ```sql
+    -- Retrieval computation
+    with some_metrics as (
+        select
+            d.date,
+            count/sum/etc as aggregated_metric
+        from int_dates_mtd d
+        inner join some_table
+            on something
+        where condition
+        group by 1
+    ),
+    --
+    -- Repeat a CTE like the above for each different date logic computation
+    --
+    main_kpis as (
+        -- Final aggregation of subqueries --
+        select
+            d.year,
+            d.month,
+            d.day,
+            d.date,
+            d.is_end_of_month,
+            d.is_current_month,
+            some_metrics.aggregated_metric
+            -- Repeat for other metrics
+        from int_dates_mtd d
+        left join some_metrics on some_metrics.date = d.date
+        -- Repeat for other CTEs
+    )
+    -- Pivoting to get previous year for each line & computing the relative increment
+    -- (rel_incr) --
+    select
+        a.year,
+        a.month,
+        a.day,
+        a.is_end_of_month,
+        a.is_current_month,
+        a.date,
+        b.date as previous_year_date,
+
+        a.aggregated_metric,
+        b.aggregated_metric as previous_year_aggregated_metric,
+        cast(a.aggregated_metric as decimal) / b.aggregated_metric
+            - 1 as relative_increment_aggregated_metric
+        -- Repeat this triple structure for any other metric
+
+    from main_kpis a
+    left join main_kpis b on a.month = b.month and a.year = b.year + 1
+    where (a.is_end_of_month = 1 or (a.is_current_month = 1 and a.day = b.day))
+    ```
+
+    - A **by deal** view, which extracts the information for each deal id, as long as the deal is set in the users table and we can trace the metric to a specific deal. For instance, if we get a Guest Journey Completed but the Host User is not set, we won't be able to attribute it. These models use a `monthly` prefix and a `by_deal` suffix respectively.
+ - Structure:
+ - The **by deal** structure is the following:
+
+    ```sql
+    -- Retrieval computation
+    -- (the deal-level additions vs. the global structure: id_deal, the by-deal
+    --  date spine and the join on id_deal)
+    with some_metrics as (
+        select
+            date_trunc('month', a_date_field)::date as first_day_month,
+            id_deal,
+            count/sum/etc as aggregated_metric
+        from a
+        inner join b
+            on something
+        where condition
+        group by 1, 2
+        -- Note that here we do NOT join with int_dates_by_deal because
+        -- 1) we don't compute MTD, so it's not needed
+        -- 2) skipping the join heavily improves performance
+    )
+    --
+    -- Repeat a CTE like the above for each different date logic computation
+    --
+    -- Final aggregation of subqueries --
+    select
+        d.year,
+        d.month,
+        d.day,
+        d.date,
+        d.id_deal,
+        some_metrics.aggregated_metric
+        -- Repeat for other metrics
+    from int_dates_by_deal d
+    left join some_metrics
+        on some_metrics.first_day_month = d.first_day_month
+        and some_metrics.id_deal = d.id_deal
+    -- Repeat for other CTEs
+
+    -- Note that here all data is at end of month or yesterday
+    -- There's no relative increase vs. last year computation!
+    ```
+
+- Since we have only been sourcing core models so far, all KPI models are placed in core, with the usual `int_core__` convention for intermediate models and `core__` for reporting models.
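+
+As a reference for the month-to-date behaviour described above, here is a minimal sketch of the kind of date spine we expect `int_dates_mtd` to expose. The column names match the models above; the calendar source (`some_calendar_spine`) and the exact date expressions are assumptions, not the actual implementation.
+
+```sql
+-- Sketch only: one row per end-of-month date for historical months, plus one row
+-- per already-elapsed day of the current month, so metrics can be joined by date.
+select
+    extract(year from c.date)  as year,
+    extract(month from c.date) as month,
+    extract(day from c.date)   as day,
+    c.date,
+    case
+        when c.date = (date_trunc('month', c.date) + interval '1 month' - interval '1 day')::date
+        then 1 else 0
+    end as is_end_of_month,
+    case
+        when date_trunc('month', c.date) = date_trunc('month', current_date)
+        then 1 else 0
+    end as is_current_month
+from some_calendar_spine c  -- assumption: any daily calendar/date spine model
+where c.date < current_date
+  and (
+        -- historical months: keep only the last day of each month
+        c.date = (date_trunc('month', c.date) + interval '1 month' - interval '1 day')::date
+        -- current month: keep every elapsed day
+        or date_trunc('month', c.date) = date_trunc('month', current_date)
+      )
+```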
+
+
+
+# Limitations
+
+- The global view directly computes the `Value`, `Previous Year Value` and `Relative Increment Value` within the same typology model. This means that the `Bookings Created`, `Previous Year Created Bookings` and `Relative Increment Bookings Created` are handled within the `Booking` model.
+ - This worked well so far, because all metrics that correspond to rates were self-contained in the same group. For example, `Guest Journey Completion Rate` uses `Guest Journey Completed` and `Guest Journey Started`, that are within the `Guest Journey` model.
+ - But it does not work well for the new metrics we aim to implement, for instance, `Revenue per Booking`, since `Revenue` will probably be an aggregation of a `Guest Revenue` model, an `Operator Revenue` model, and an already existing `Booking` model…
+    - **Solution** → The pivoting logic to retrieve the previous year should be abstracted into a higher layer (see the sketch after this list)
+- The Power BI report directly sources the `core__mtd_aggregated_metrics` and `core__monthly_aggregated_metrics_history_by_deal` models.
+ - This worked well so far, because all metrics were coming from Core.
+    - But it does not work well for the new metrics we aim to implement, for instance, Host Resolution Amount Paid, since this will come from Xero. The Resolution model could still be called `int_xero__ABC`, but the aggregation would no longer contain information from Core only.
+ - **Solution** → The aggregated metrics models and the exposures should migrate to Cross
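+
+To make the first solution above more tangible, here is a minimal sketch of what the abstracted previous-year layer (the `int_mtd_vs_previous_year_metrics` model referenced in the strategy below) could look like. `int_dates_mtd` and the typology model names come from the strategy steps; the metric columns (`created_bookings`, `guest_revenue`) are illustrative assumptions only.
+
+```sql
+-- Sketch only: typology models expose raw aggregates, and the year-over-year
+-- pivot plus any cross-group derived metric is computed once, in this layer.
+with all_metrics as (
+    select
+        d.year,
+        d.month,
+        d.day,
+        d.date,
+        d.is_end_of_month,
+        d.is_current_month,
+        bk.created_bookings,   -- assumed column from the Bookings typology model
+        gr.guest_revenue       -- assumed column from the Guest Revenue typology model
+    from int_dates_mtd d
+    left join int_core__mtd_booking_metrics bk on bk.date = d.date
+    left join int_mtd_guest_revenue_metrics gr on gr.date = d.date
+)
+
+select
+    a.year,
+    a.month,
+    a.day,
+    a.date,
+    a.is_end_of_month,
+    a.is_current_month,
+    b.date as previous_year_date,
+
+    -- cross-group metrics can now be derived before the pivot...
+    cast(a.guest_revenue as decimal) / nullif(a.created_bookings, 0) as revenue_per_booking,
+    -- ...and the previous-year / relative-increment pattern is written only once
+    cast(b.guest_revenue as decimal) / nullif(b.created_bookings, 0) as previous_year_revenue_per_booking,
+    (cast(a.guest_revenue as decimal) / nullif(a.created_bookings, 0))
+        / nullif(cast(b.guest_revenue as decimal) / nullif(b.created_bookings, 0), 0)
+        - 1 as relative_increment_revenue_per_booking
+
+from all_metrics a
+left join all_metrics b
+    on a.month = b.month
+    and a.year = b.year + 1
+where a.is_end_of_month = 1
+   or (a.is_current_month = 1 and a.day = b.day)
+```
+
+Because each typology model only exposes its raw aggregates here, a metric like `Revenue per Booking` can combine sources freely, and the pivoting logic lives in a single place instead of being repeated in every typology model.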
+
+# Strategy
+
+- Keep the reporting always **alive** by making many small changes to the code
+- Track progress and details in this Notion page
+
+**Step-by-step details:**
+
+1. Remove previous year comparison logic from `int_core__mtd_booking_metrics` and create an abstraction layer in `int_mtd_vs_previous_year_metrics` model that includes the bookings. Make `int_core__aggregated_metrics` read Bookings metrics from this table
+2. Remove previous year comparison logic from `int_core__mtd_accommodation_metrics` and add it into the `int_mtd_vs_previous_year_metrics` model. Make `int_core__aggregated_metrics` read Accommodation metrics from this table
+3. Remove previous year comparison logic from `int_core__mtd_deal_metrics` and add it into the `int_mtd_vs_previous_year_metrics` model. Make `int_core__aggregated_metrics` read Deal metrics from this table
+4. Remove previous year comparison logic from `int_core__mtd_guest_journey_metrics` and add it into the `int_mtd_vs_previous_year_metrics` model. Make `int_core__aggregated_metrics` read Guest Journey metrics from this table
+5. Remove previous year comparison logic from `int_mtd_guest_revenue_metrics` and add it into the `int_mtd_vs_previous_year_metrics` model. Make `int_core__aggregated_metrics` read Guest Revenue metrics from this table
+6. Transition `int_core__mtd_aggregated_metrics` and `int_core__monthly_aggregated_metrics_history_by_deal` into `int_mtd_aggregated_metrics` and `int_monthly_aggregated_metrics_history_by_deal` respectively. Make the respective reporting models read from the new cross models.
+7. Create the reporting cross models and re-do the reporting
+8. Once validated, deploy the Power BI reading from cross reporting.
+9. Once `core__monthly_aggregated_metrics_history_by_deal` and `core__mtd_aggregated_metrics` have no downstream dependants at all, remove them.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Refactoring Business KPIs - 2024-07-05 5deb6aadddb34884ae90339402ac16e3.md:Zone.Identifier b/notion_data_team_no_files/Refactoring Business KPIs - 2024-07-05 5deb6aadddb34884ae90339402ac16e3.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Refactoring Business KPIs - 2024-07-05 5deb6aadddb34884ae90339402ac16e3.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Reproducing versioning bug in dbt 2100446ff9c98034902fe1c7080b3698.md b/notion_data_team_no_files/Reproducing versioning bug in dbt 2100446ff9c98034902fe1c7080b3698.md
new file mode 100644
index 0000000..1477600
--- /dev/null
+++ b/notion_data_team_no_files/Reproducing versioning bug in dbt 2100446ff9c98034902fe1c7080b3698.md
@@ -0,0 +1,83 @@
+# Reproducing versioning bug in dbt
+
+Related:
+
+- Task related to this page: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories?workitem=30882
+- Original incident: [20240913-01 - `dbt run` blocked by “not in the graph” error](20240913-01%20-%20dbt%20run%20blocked%20by%20%E2%80%9Cnot%20in%20the%20graph%201030446ff9c980c291f1d57751f443ee.md)
+- Link to Github bug: https://github.com/dbt-labs/dbt-core/issues/8872
+- Recent version upgrade in our project to 1.9.8
+ - PR: https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/5455
+ - Task: https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories?workitem=30881
+ - Page: [dbt 1.9.1 to 1.9.8 upgrade](dbt%201%209%201%20to%201%209%208%20upgrade%202100446ff9c980cbaa01e84c22bdd13c.md)
+
+—
+
+# Summary
+
+- I confirmed the bug still happens in `1.9.8`
+- I realised it won’t be fixed until dbt `1.10.0`, so we just wait until then.
+
+—
+
+I want to reproduce the bug properly to confirm that it happens in `1.9.1` but not in `1.9.8`.
+
+Steps:
+
+- Set up env with `1.9.1`
+- Reproduce issue with script
+- Set up env with `1.9.8`
+- Repeat script, confirm the error doesn’t appear
+
+Helpers:
+
+Downgrade to `1.9.1`
+
+```bash
+pip uninstall dbt-core -y
+pip install dbt-core==1.9.1
+dbt --version
+```
+
+Upgrade to `1.9.8`
+
+```bash
+pip uninstall dbt-core -y
+pip install dbt-core==1.9.8
+dbt --version
+```
+
+Cause the issue
+
+```bash
+# Delete target to begin from a clean state
+rm -rf target/*
+# Create a new model
+echo "SELECT 1" > models/debug_model.sql
+# Add a dependant
+echo "SELECT * FROM {{ref('debug_model')}}" > models/debug_dependant.sql
+# Build to cause a parse. Should run fine.
+dbt build --select debug_model
+# Create new version of same model and add yaml bits
+echo "SELECT 2" > models/debug_model_v2.sql
+# Add schema.yml contents
+echo -e "version: 2\n\nmodels:\n  - name: debug_model\n    description: \"Debug model\"\n    latest_version: 1\n    versions:\n      - v: 1\n      - v: 2" > models/schema.yml
+# Build again; the new parse triggers the bug
+dbt build --select debug_model
+
+# Optional cleanup
+rm models/debug_model.sql; rm models/debug_dependant.sql; rm models/debug_model_v2.sql; rm models/schema.yml; rm -rf target/*
+```
+
+Okay, funky…
+
+Using the above, I’m able to reproduce the issue in `1.9.8`, which I was not expecting at all.
+
+I’ve made [this comment](https://github.com/dbt-labs/dbt-core/issues/8872#issuecomment-2967223325) on [the original issue](https://github.com/dbt-labs/dbt-core/issues/8872). If it doesn’t get attention, I’ll probably open a new one.
+
+## Final explainer
+
+Okay, I found out what’s going on.
+
+It’s ok that `1.9.8` does not fix the bug. The PR that contains the bug fix was included in the branch working towards version `1.10.0`, and the dbt folks decided not to backport it to any `1.9.x` patch. That confirms that (1) it is not fixed in `1.9.8` and (2) that’s expected, not really an issue.
+
+We will have to wait until `1.10.0` for the fix.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Request for ideas [Data Team] 1e30446ff9c980bc80b1fbd141fb25c4.md b/notion_data_team_no_files/Request for ideas [Data Team] 1e30446ff9c980bc80b1fbd141fb25c4.md
new file mode 100644
index 0000000..b6938f4
--- /dev/null
+++ b/notion_data_team_no_files/Request for ideas [Data Team] 1e30446ff9c980bc80b1fbd141fb25c4.md
@@ -0,0 +1,59 @@
+# Request for ideas [Data Team]
+
+# What is this?
+
+The Data team is running a quite interesting line of work to improve our flagging system.
+
+Your ideas and thoughts are critical to steering this in the right direction. We’re sharing this template to collect them.
+
+# How to use it?
+
+- Make a copy of this page just for yourself.
+- Read through the *Context* section.
+- Fill in *Your ideas* section.
+- Once you’re done, share with Pablo.
+
+Note: we’ll be collecting these until 2025-05-02.
+
+# Context
+
+Screening guests is one of our core activities. Our hosts’ guests go through different types of journeys depending on the selected services, but they all lead to a final state for the booking: Approved, Flagged, Not Approved, etc. These currently serve both as risk info for our hosts and as a contractual factor for the limits of our protection services.
+
+The dreaming starts here: imagine Ben R. went to Hogwarts for a semester and came back with a **magical** **screening crystal ball.** This is a magical ball that can see the future of any booking and tell us whether there is going to be damage to the property during it or not. Maybe even what the damage will be and how costly it will be.
+
+Ben can plug this ball into some server in Truvi’s systems (since it’s 2025, the crystal ball obviously has a USB-C connector) and have all of our screenings be marked as Approved or Not Approved depending on what the crystal ball says. And again, every time, the booking will not have any issues if the crystal ball said it was approved, and conversely, it will surely have damages and claims if the crystal ball says it will.
+
+# Your ideas
+
+Having that crystal ball would be really nice. But, if we had it… what would we do with it?
+
+**We would like your input on how you think we could best leverage an incredible screening process to serve our customers and do business with new or modified services.**
+
+The question is open ended, but just to give a bit of guidance if you don’t know where to start…
+
+- Would you change the pricing/offering/delivery/terms of any of our services? How?
+- Would you propose new screening/protection/something-else services? How would they work?
+- Would you perhaps even try out an entirely new line of business that has nothing to do with what we do today?
+- Who would be the ideal customer for your modified/new services?
+- How would you price them?
+
+Some additional tips:
+
+- Please, be dreamy. Don’t think much of constraints, restrictions, execution, etc. This is all about the what and the why, not about the who, the when or the how.
+- Feel no pressure to only add “good” ideas. We’re collecting creativity: we’re happy to hear all kinds of nonsense. We’ll simply park it somewhere in the future if it doesn’t work out :)
+- Explain yourself in whatever way feels most comfortable. We are not looking for a specific format or length.
+
+ ⚠️ **Please remember to make a copy of this page. Three dots on the top right > Duplicate** ⚠️
+
+Your turn.
+
+***If we had the magical screening crystal ball…***
+
+## Thoughts
+
+- Features: Booking, Listing, Guest and Host
+ - If we detect G/H patterns that are risky → when selling, decide pricing accordingly and/or when extending contract, decide price accordingly
+- Current functionality: if we’re able to increase current screening and flagging performance, it would naturally increase overall risk management Truvi-wise.
+- Adjust Service Price to Protection Amount (or vice versa)
+ - Opt-In functionality: a flagged booking that reduces protection could allow the host to increase protection for an additional fee
+- On Guest Products, which are one-off, dynamic pricing vs. risk makes more sense. But what if we had Low Risk accounts to which we could offer Low Price services? Is there a market for that?
\ No newline at end of file
diff --git a/notion_data_team_no_files/Retrieving New Dash MVP info 37429e2b559e492a881c088bdba5ad80.md b/notion_data_team_no_files/Retrieving New Dash MVP info 37429e2b559e492a881c088bdba5ad80.md
new file mode 100644
index 0000000..4ca52c0
--- /dev/null
+++ b/notion_data_team_no_files/Retrieving New Dash MVP info 37429e2b559e492a881c088bdba5ad80.md
@@ -0,0 +1,481 @@
+# Retrieving New Dash MVP info
+
+Screenshots and comments come from a first set of explorations conducted at the beginning of August, so things will likely evolve.
+
+→ To update the New Dash MVP performance Excel file, [jump into the last section here](Retrieving%20New%20Dash%20MVP%20info%2037429e2b559e492a881c088bdba5ad80.md).
+
+# Users in the MVP
+
+This is your starting point. Mainly, what is available in the New Dash MVP from a Product Management POV. [Figma link is here](https://www.figma.com/board/ULYFuegS7yllczlgL5DeA3/New-Dash-future-vision-map?node-id=0-1&t=qWyORzs7bOlCaTKA-0).
+
+
+
+According to the screenshot, not all Users are in the MVP; only a subset of the KYG Lite / Hostfully / non-Airbnb/Booking.com/Agoda PMCs are available. This accounts for 22 users.
+
+To retrieve these users, we can use:
+
+```sql
+select
+ UserId as MvpUserId
+from
+ dbo.Claim
+where
+ ClaimType = 'KygMvp'
+ and ClaimValue = 'true'
+```
+
+I am not sure what this Claim table is. Running a simple count on this returns exactly 22 users:
+
+
+
+This can be joined easily with the already existing User table. I did a dedicated one-by-one check with the e-mails and I confirm these 22 users are the ones available in the [User spreadsheet](https://docs.google.com/spreadsheets/d/1Iuq_paR3siLEJHPr18dJBpy9aIy6EVbN/edit?gid=1679347596#gid=1679347596).
+
+Moving on.
+
+# Basics of the MVP
+
+Take a look again at the Figma for the MVP, this will clarify the following.
+
+A **Product Service** is the detail of the Service being offered in New Dash. Not to be confused with a Product from the old dash.
+
+```sql
+select * from ProductService ps
+```
+
+
+
+> Note: In some places it is called Product, in others ProductService.
+>
+
+You will notice there’s both an **Id** and a **ProductServiceFlag** whose values are powers of 2 (2^0=1, 2^1=2, 2^2=4, etc.), i.e. it works as a bit flag. Keep this in mind.
+
+A **Product Bundle** is a bundle of one or many different **Product Services**
+
+```sql
+select * from ProductBundle pb
+```
+
+
+
+Currently in the MVP we only have 2 Product Bundles, namely **Basic Screening** and **Basic Program**. Each **Product Bundle** is associated with a **Protection Plan** via ProtectionPlanId.
+
+```sql
+select * from ProtectionPlan pp
+```
+
+
+
+At this stage, in the MVP, this table is not super informative since these **Protection Plans** are each linked to just one **Protection**:
+
+```sql
+select * from Protection
+```
+
+
+
+In a nutshell, we have **Basic Screening** protection and **IPRP Basic**.
+
+You will notice that we have a column called **RequiredProductServices** that is either 1 or 257. My guess is that this indicates which Product Services are needed for a given Protection. Remember the **ProductServiceFlag** field? Well:
+
+- Basic Screening ⇒ RequiredProductServices = 1 ⇒ ProductService.Id = 1 ⇒ Basic Screening
+- IPRP Basic ⇒ RequiredProductServices = 257 = 1 + 256
+ - ProductService.Id = 1 ⇒ Basic Screening
+ - ProductService.Id = 256 ⇒ Waiver Pro
+
+Meaning we have 2 Product Services (Basic Screening, Waiver Pro) currently available in 2 Product Bundles. Basic Screening service is available in both, and the only thing that changes is having or not Waiver Pro available within the Product Bundle.
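+
+If we ever need this decomposition in SQL rather than by eye, a bitwise join should do it. The following is only a hedged sketch: it assumes the power-of-two value lives in `ProductService.Id` (as the mapping above suggests) and that both tables expose readable name columns (`Protection.Name`, `ProductService.FullName`), which needs verifying against the backend.
+
+```sql
+-- Hedged sketch: decode RequiredProductServices into its component Product Services
+-- with a bitwise AND. Name columns are assumptions.
+select
+    p.Name as ProtectionName,
+    ps.FullName as ProductServiceName
+from Protection p
+inner join ProductService ps
+    on (p.RequiredProductServices & ps.Id) <> 0
+```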
+
+From a business point of view, this would mean that:
+
+The product bundle Basic Screening only contains the Product Service Basic Screening and it’s related to the Protection Basic Screening.
+
+The product bundle Basic Program contains 2 product services. The first one, is the Basic Screening (same as before). The second one is the only paid service in the MVP and refers to Waiver Pro.
+
+At this stage it is worth mentioning a couple of things:
+
+- Basic Screening is inherent to all users according to the MVP New Dash documentation. It is likely linked to the DisplayOnFrontEnd field in the ProductBundle table, since I assume that if the user cannot see it, he/she cannot remove it. Needs confirmation.
+- Basic Screening is a free service; Waiver Pro is a paid service. More on this later 🙂
+
+Moving on.
+
+# Pricing, billing, invoicing, protecting
+
+This is not needed for what I am doing right now, so it will need to be filled in later on, but I guess it is important for the invoicing part of the requests.
+
+```sql
+select * from ProductServiceToPrice pstp
+```
+
+
+
+In a nutshell, each ProductService (based on the ProductServiceId) has a different pricing (Amount) for each Currency (CurrencyId). Additionally, it can be Billed with different methods (BillingMethodId), Invoiced with different methods (InvoicedMethodId) and Paid with different types (PaymentTypeId).
+
+It might be interesting to note that right now all UserProductBundleId values are null, so I assume that at this stage no User has created a new Bundle other than the default.
+
+A similar behaviour can be found for the Protections:
+
+```sql
+select * from ProtectionPlanToCurrency pptc
+```
+
+
+
+In a nutshell, each Protection Plan (not Protection!) has different protection values (Baseline, Lower, Maximum) depending on the Currency (CurrencyId). In this case, I am not sure why we have SuperhogUserId here. The 3 different levels of protection are not clear to me either.
+
+This section can be improved by adding multi-joins with other tables (a first sketch below). Tables to be explored: BillingMethod, InvoicingMethod.
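+
+As a starting point, here is a hedged sketch of such a multi-join. The `Id` and `FullName` columns on BillingMethod and InvoicingMethod are assumptions based on how other lookup tables in the backend look, and need verifying.
+
+```sql
+-- Hedged sketch: enrich ProductServiceToPrice with readable billing/invoicing labels.
+select
+    pstp.ProductServiceId,
+    pstp.CurrencyId,
+    pstp.Amount,
+    bm.FullName as BillingMethodName,    -- assumed column
+    im.FullName as InvoicingMethodName   -- assumed column
+from ProductServiceToPrice pstp
+left join BillingMethod bm
+    on bm.Id = pstp.BillingMethodId
+left join InvoicingMethod im
+    on im.Id = pstp.InvoicedMethodId
+```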
+
+# User has Product Bundles
+
+Each **User** can have one or multiple **Product Bundles**.
+
+> **Important: This relationship means that a User CAN apply a Product Bundle into a Listing, not that necessarily the Product Bundle IS USED.**
+>
+
+In the case of no bundle applied, the default bundle applied is the Basic Screening. At this stage, all MVP users have the 2 Product Bundles: Basic Screening and Basic Program. The relationship User has Product Bundle is available in **UserProductBundle**.
+
+> **Note**: there are **MORE** users in UserProductBundle than the ones available in the MVP Users. This is because a release went out on the 18th of June (way before the MVP launch on the 30th of July), which coincides with the date from which we start seeing Bookings with bundles being populated. It appears that mainly Hostfully bookings are being populated, as well as a few OwnerRez bookings. These two PMSs are likely the only ones with webhooks enabled currently, so the assumption is that when this release went out, we had already begun to attach product bundles to bookings in the back-end. This is also why non-MVP users were having product bundles created.
+>
+
+In the following queries I will just limit the data displayed to these 22 users.
+
+```sql
+with UserMVP AS (
+select
+ UserId as MvpUserId
+from
+ dbo.Claim
+where
+ ClaimType = 'KygMvp'
+ and ClaimValue = 'true'
+)
+select
+ [User].CompanyName,
+ UserProductBundle.*
+from UserMVP
+left join [User]
+on UserMVP.MvpUserId = [User].Id
+left join UserProductBundle
+on UserProductBundle.SuperhogUserId = UserMVP.MvpUserId
+```
+
+
+
+I intentionally hid SuperhogUserId, but you can get a visual feel for this table from the CompanyName, available from the User table. Total row count is 44 = 22x2. Some interesting points:
+
+- We see how each user has 2 Product Bundle Ids, for which we actually have the Name/Display Name directly in this table. Thus, no need to join UserProductBundle to ProductBundle (or at least, not at the moment). This looks a bit strange since usually the backend is fully normalised, but anyway.
+- We also have the ChoosenProductServices here, with the values 1 and 257. So this is likely the way to retrieve “this bundle has these services”, as seen before. However, I do not have the query to process it, so at this stage I’m assuming so based on the business context.
+
+# A User assigns a Product Bundle to a Listing
+
+Keep in mind that in the backend, Listing = Accommodation.
+
+This information is available here:
+
+```sql
+select * from AccommodationToProductBundle atpb
+```
+
+
+
+It would make sense, since we have a **UserProductBundleId** that is linked to an **AccommodationId** over a period of time (**StartDate - EndDate**). However, as you can see, the table is empty. This means that no user has assigned a Product Bundle to a Listing yet.
+
+# A Booking comes from New Dash MVP
+
+To see if a Booking comes from a New Dash MVP we should use `BookingToProductBundle` table.
+
+```sql
+select * from BookingToProductBundle btpb order by CreatedDate
+```
+
+
+
+When exploring this table, you’ll see that there’s already information available from before the launch of the MVP (July 30th). This has been explained in the note [here](Retrieving%20New%20Dash%20MVP%20info%2037429e2b559e492a881c088bdba5ad80.md).
+
+In order to filter by those that come from the New Dash MVP, we’ll force the join to retrieve Bookings that have UserProductBundle coming from Users that are in the MVP. In essence:
+
+```sql
+with UserMVP AS (
+select
+ UserId as MvpUserId
+from
+ dbo.Claim
+where
+ ClaimType = 'KygMvp'
+ and ClaimValue = 'true'
+)
+select
+ [User].CompanyName,
+ UserProductBundle.Name as ProductBundleName,
+ BookingToProductBundle.*
+from UserMVP
+left join [User]
+on UserMVP.MvpUserId = [User].Id
+left join UserProductBundle
+on UserProductBundle.SuperhogUserId = UserMVP.MvpUserId
+inner join BookingToProductBundle
+on BookingToProductBundle.UserProductBundleId = UserProductBundle.Id
+```
+
+
+
+However, we still see that for some of these MVP users, we have Bookings with Product Bundles created before the launch of the MVP. I’ll take an opinionated approach and enforce that a Booking needs to have happened after the MVP launch on the 30th of July; or at least, distinguish it in future queries.
+
+> Note: this is not perfect. If tomorrow we have a new user joining the MVP, we won’t be able to cut previous bookings at a dedicated migration date at user level. I checked and the Claim table does not contain any date-related field; it might be worth digging deeper with the Engineering team.
+>
+
+```sql
+with UserMVP AS (
+select
+ UserId as MvpUserId
+from
+ dbo.Claim
+where
+ ClaimType = 'KygMvp'
+ and ClaimValue = 'true'
+)
+select
+ [User].CompanyName,
+ UserProductBundle.Name as ProductBundleName,
+ BookingToProductBundle.*
+from UserMVP
+left join [User]
+on UserMVP.MvpUserId = [User].Id
+left join UserProductBundle
+on UserProductBundle.SuperhogUserId = UserMVP.MvpUserId
+inner join BookingToProductBundle
+on BookingToProductBundle.UserProductBundleId = UserProductBundle.Id
+where BookingToProductBundle.CreatedDate >= '2024-07-30'
+```
+
+
+
+So it seems we have 23 Bookings after the MVP launch, all with Basic Screening. This makes sense: since no Basic Program was applied to any listing, all of them should be coming from the Basic Screening bundle (the default), which only contains the Basic Screening service.
+
+From here I will take more opinionated decisions, for the sake of data quality. Since there’s a Start - End date, I assume this table can contain Booking duplicates if the UserProductBundleId changes over time. My opinionated decision is to take the last updated row, namely the one that has EndDate as [NULL]. At the moment, all of them behave this way, so it does not matter that much.
+
+At this stage, I assume that if at some point we have a Booking with a Product Bundle being Basic Program, it enforces the payment of the Waiver Pro service. I wonder if this will change in the future if a Product Bundle contains multiple Guest services (Waiver, Deposit): one thing is what the Guest sees, the other what he/she chooses. Since the MVP has zero guest interaction, I assume it makes sense like this for now, but it might be worth keeping this in mind.
+
+# Tracking the MVP performance
+
+Based on all these considerations, here’s a nice MVP query. Until we have this data in DWH with a proper PBI report, we need to survive by updating an Excel file.
+
+Steps:
+
+1. Run [query below](Retrieving%20New%20Dash%20MVP%20info%2037429e2b559e492a881c088bdba5ad80.md) and export it to csv
+2. Copy-paste the csv it into the Excel file attached below in the Raw Data tab (erasing any existing data)
+3. On Excel, go to Data → Refresh All
+4. Save with date suffix (`_2024MMDD`) and upload into Lou’s shared folder [here](https://guardhog-my.sharepoint.com/personal/louise_dowds_superhog_com/_layouts/15/onedrive.aspx?FolderCTID=0x01200060678FFDDEE2D345B75C219A1E5F1356&id=%2Fpersonal%2Flouise%5Fdowds%5Fsuperhog%5Fcom%2FDocuments%2FProduct%20Improvements%2FNew%20dash%2FTND%20reports).
+
+### Tracking query
+
+This query could be optimised and improved. However, I found it easier to understand what the data model looks like by taking additional steps. Just copy-paste this into DBeaver against the Live prod schema and run it.
+
+```sql
+/*
+
+THIS QUERY RETRIEVES INFORMATION FOR THE MVP MINIMAL PERFORMANCE TRACKING
+- Reads directly from the backend.
+- Assumes MVP users come from Claim table with ClaimType as KygMvp and ClaimValue is true
+- Assumes only retrieving the latest state, i.e., where EndDate is null
+- Assumes paying services are all of those that are not in the ProductBundle Basic Screening
+- For Bookings with Product Bundle, it excludes any booking before the MVP launch date on 30th July 2024
+
+*/
+
+with UserMVP AS (
+select
+ UserId as MvpUserId
+from
+ dbo.Claim
+where
+ -- THIS IS TO RETRIEVE MVP USERS --
+ ClaimType = 'KygMvp'
+ and ClaimValue = 'true'
+),
+UserHasBundles AS (
+select
+ UserMVP.MvpUserId,
+ [User].CompanyName,
+ COUNT(DISTINCT case when UserProductBundle.Name not in ('BasicScreening') then UserProductBundle.Id else null end) AS PaidService_UserProductBundleCount,
+ COUNT(DISTINCT UserProductBundle.Id) AS UserProductBundleCount
+from
+ UserMVP
+inner join [User]
+on
+ UserMVP.MvpUserId = [User].Id
+inner join UserProductBundle
+on
+ UserMVP.MvpUserId = UserProductBundle.SuperhogUserId
+where
+ UserProductBundle.EndDate is null
+group by
+ UserMVP.MvpUserId,
+ [User].CompanyName
+),
+BookingsPerUserBundle AS (
+select
+ UserMVP.MvpUserId,
+ UserProductBundle.Name as ProductBundleName,
+ CAST(MIN(case when BookingToProductBundle.CreatedDate >= '2024-07-30' then BookingToProductBundle.CreatedDate else null end) as date) as FirstBookingWithBundleCreatedDate,
+ CAST(MAX(case when BookingToProductBundle.CreatedDate >= '2024-07-30' then BookingToProductBundle.CreatedDate else null end) as date) as LastBookingWithBundleCreatedDate,
+ COUNT(DISTINCT case when BookingToProductBundle.CreatedDate >= '2024-07-30' then BookingToProductBundle.BookingId else null end) as TotalBookingsWithBundle,
+ COUNT(DISTINCT BookingToProductBundle.BookingId) as TotalBookingsWithBundle_FullHistory
+from
+ UserMVP
+inner join UserProductBundle
+on
+ UserProductBundle.SuperhogUserId = UserMVP.MvpUserId
+inner join BookingToProductBundle
+on
+ BookingToProductBundle.UserProductBundleId = UserProductBundle.Id
+where
+ BookingToProductBundle.EndDate is null
+group by
+ UserMVP.MvpUserId,
+ UserProductBundle.Name
+),
+ListingsPerUser AS (
+select
+ UserMVP.MvpUserId,
+ COUNT(DISTINCT Accommodation.AccommodationId) as TotalListings,
+ COUNT(case when Accommodation.IsActive = 1 then Accommodation.AccommodationId else null end) as ActiveListings
+from
+ UserMVP
+inner join AccommodationToUser
+on
+ UserMVP.MvpUserId = AccommodationToUser.SuperhogUserId
+inner join Accommodation
+on
+ Accommodation.AccommodationId = AccommodationToUser.AccommodationId
+group by
+ UserMVP.MvpUserId
+),
+ListingsWithBundlePerUser AS (
+select
+ UserMVP.MvpUserId,
+ UserProductBundle.Name as ProductBundleName,
+ CAST(MIN(AccommodationToProductBundle.CreatedDate) as date) as FirstListingWithBundleCreatedDate,
+ CAST(MAX(AccommodationToProductBundle.CreatedDate) as date) as LastListingWithBundleCreatedDate,
+ COUNT(DISTINCT AccommodationToProductBundle.AccommodationId) as TotalListingsWithBundle
+from
+ UserMVP
+inner join UserProductBundle
+on
+ UserProductBundle.SuperhogUserId = UserMVP.MvpUserId
+inner join AccommodationToProductBundle
+on
+ AccommodationToProductBundle.UserProductBundleId = UserProductBundle.Id
+where
+ AccommodationToProductBundle.EndDate is null
+group by
+ UserMVP.MvpUserId,
+ UserProductBundle.Name
+),
+ListingAggregation AS (
+select
+ ListingsPerUser.MvpUserId,
+ AVG(ListingsPerUser.TotalListings) AS TotalListings,
+ AVG(ListingsPerUser.ActiveListings) AS ActiveListings,
+ MIN(case when ListingsWithBundlePerUser.ProductBundleName not in ('BasicScreening') then ListingsWithBundlePerUser.FirstListingWithBundleCreatedDate else null end) as PaidService_FirstListingWithBundleCreatedDate,
+ MAX(case when ListingsWithBundlePerUser.ProductBundleName not in ('BasicScreening') then ListingsWithBundlePerUser.LastListingWithBundleCreatedDate else null end) as PaidService_LastListingWithBundleCreatedDate,
+ SUM(case when ListingsWithBundlePerUser.ProductBundleName not in ('BasicScreening') then ListingsWithBundlePerUser.TotalListingsWithBundle else 0 end) as PaidService_TotalListingsWithBundle,
+ MIN(ListingsWithBundlePerUser.FirstListingWithBundleCreatedDate) AS FirstListingWithBundleCreatedDate,
+ MAX(ListingsWithBundlePerUser.LastListingWithBundleCreatedDate) AS LastListingWithBundleCreatedDate,
+ SUM(ListingsWithBundlePerUser.TotalListingsWithBundle) AS TotalListingsWithBundle
+from
+ ListingsPerUser
+left join ListingsWithBundlePerUser
+on
+ ListingsPerUser.MvpUserId = ListingsWithBundlePerUser.MvpUserId
+group by
+ ListingsPerUser.MvpUserId
+),
+BookingsAggregation AS (
+select
+ BookingsPerUserBundle.MvpUserId,
+ MIN(case when BookingsPerUserBundle.ProductBundleName not in ('BasicScreening') then BookingsPerUserBundle.FirstBookingWithBundleCreatedDate else null end) as PaidService_FirstBookingWithBundleCreatedDate,
+ MAX(case when BookingsPerUserBundle.ProductBundleName not in ('BasicScreening') then BookingsPerUserBundle.LastBookingWithBundleCreatedDate else null end) as PaidService_LastBookingWithBundleCreatedDate,
+ SUM(case when BookingsPerUserBundle.ProductBundleName not in ('BasicScreening') then BookingsPerUserBundle.TotalBookingsWithBundle else 0 end) as PaidService_TotalBookingsWithBundle,
+ MIN(BookingsPerUserBundle.FirstBookingWithBundleCreatedDate) AS FirstBookingWithBundleCreatedDate,
+ MAX(BookingsPerUserBundle.LastBookingWithBundleCreatedDate) AS LastBookingWithBundleCreatedDate,
+ SUM(BookingsPerUserBundle.TotalBookingsWithBundle) AS TotalBookingsWithBundle
+from
+ BookingsPerUserBundle
+group by
+ BookingsPerUserBundle.MvpUserId
+)
+select
+ UserHasBundles.MvpUserId,
+ UserHasBundles.CompanyName,
+ UserHasBundles.UserProductBundleCount,
+ UserHasBundles.PaidService_UserProductBundleCount,
+ COALESCE(ListingAggregation.TotalListings,
+ 0) AS TotalListings,
+ COALESCE(ListingAggregation.ActiveListings,
+ 0) AS ActiveListings,
+ COALESCE(ListingAggregation.TotalListingsWithBundle,
+ 0) as TotalListingsWithBundle,
+ COALESCE(ListingAggregation.PaidService_TotalListingsWithBundle,
+ 0) as PaidService_TotalListingsWithBundle,
+ COALESCE(BookingsAggregation.TotalBookingsWithBundle,
+ 0) as TotalBookingsWithBundle,
+ COALESCE(BookingsAggregation.PaidService_TotalBookingsWithBundle,
+ 0) as PaidService_TotalBookingsWithBundle,
+ -- Listing Date Details --
+ ListingAggregation.FirstListingWithBundleCreatedDate,
+ ListingAggregation.LastListingWithBundleCreatedDate,
+ ListingAggregation.PaidService_FirstListingWithBundleCreatedDate,
+ ListingAggregation.PaidService_LastListingWithBundleCreatedDate,
+ -- Booking Date Details --
+ BookingsAggregation.FirstBookingWithBundleCreatedDate,
+ BookingsAggregation.LastBookingWithBundleCreatedDate,
+ BookingsAggregation.PaidService_FirstBookingWithBundleCreatedDate,
+ BookingsAggregation.PaidService_LastBookingWithBundleCreatedDate
+from
+ UserHasBundles
+left join ListingAggregation
+on
+ UserHasBundles.MvpUserId = ListingAggregation.MvpUserId
+left join BookingsAggregation
+on
+ UserHasBundles.MvpUserId = BookingsAggregation.MvpUserId
+
+```
+
+### Excel template
+
+Example of Excel (use it as the template)
+
+[NewDashMVP_PerformanceTracking.xlsx](NewDashMVP_PerformanceTracking.xlsx)
+
+Example of export:
+
+[NewDashMVP_20240805.csv](NewDashMVP_20240805.csv)
+
+# Migrating this into the DWH
+
+Pablo here. On my first day running this export (2024-08-12) I felt the frustration of being a human ETL machine and that pushed me to start cleaning this up already. So, I’m going to start to:
+
+- Bring the tables from Core used in the above query into the DWH with Airbyte.
+- Start modeling any intermediate entities that make sense (a first staging-model sketch is included below).
+- Make a reporting table with the same contents we’re doing here for the export.
+
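+As an illustration of the modeling step, here is a hedged sketch of what a staging model on top of the Airbyte-loaded `dbo.Claim` table could look like. The source name (`core`), the lowercase table alias and the renamed columns are illustrative assumptions, not the project’s actual conventions.
+
+```sql
+-- Hedged sketch of a staging model over dbo.Claim (names are illustrative only).
+with source as (
+    select * from {{ source('core', 'claim') }}
+)
+select
+    UserId     as id_user,
+    ClaimType  as claim_type,
+    ClaimValue as claim_value
+from source
+```
+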
+## Bring tables over
+
+- The Core tables used in the export are:
+ - [x] `dbo.Claim`
+ - [x] `dbo."User"`
+ - [x] `UserProductBundle`
+ - [x] `BookingToProductBundle`
+ - [x] `Accommodation`
+ - [x] `AccommodationToUser`
+ - [x] `AccommodationToProductBundle`
+- Staging models:
+ - [x] `dbo.Claim`
+ - [x] `UserProductBundle`
+ - [x] `BookingToProductBundle`
+ - [x] `AccommodationToProductBundle`
+- Intermediate models:
+ - [x] `int_core__user_product_bundle`
+ - [x] `int_core__accommodation_to_product_bundle`
+ - [x] `int_core__booking_to_product_bundle`
+ - [x] `int_core__new_dash_user_overview`
\ No newline at end of file
diff --git a/notion_data_team_no_files/Retrospectives ab52ef5a73a040b6aa9a07121b0e0aac.md b/notion_data_team_no_files/Retrospectives ab52ef5a73a040b6aa9a07121b0e0aac.md
new file mode 100644
index 0000000..da1674b
--- /dev/null
+++ b/notion_data_team_no_files/Retrospectives ab52ef5a73a040b6aa9a07121b0e0aac.md
@@ -0,0 +1,21 @@
+# Retrospectives
+
+[20240611 Retro](20240611%20Retro%209b8bbbe210d04a55a753616c2fb0be2c.md)
+
+[20240709 Retro](20240709%20Retro%206c815a39840f408fbd935c4b3e937be3.md)
+
+[20240819 Retro](20240819%20Retro%2088ed749ed43b4eb7a2d277ddd2b03747.md)
+
+[20240913 Retro](20240913%20Retro%20f75a7d97742d492fb3587844fa700926.md)
+
+[20241008 Retro](20241008%20Retro%201190446ff9c9807982abfe76f161994f.md)
+
+[20241112 Retro](20241112%20Retro%2013c0446ff9c980b0a942d10a7c68583c.md)
+
+[20241210 Retro](20241210%20Retro%201580446ff9c9803ea397d22f31bade85.md)
+
+[20250214 Retro](20250214%20Retro%2019a0446ff9c980da9bfdfffa4b982bad.md)
+
+[20250319 Retro](20250319%20Retro%201bb0446ff9c980f09345f58c8517c945.md)
+
+[20250505 Retro](20250505%20Retro%201ea0446ff9c98035943ffc3c3f4a6306.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Revenue naming - 2024-09-30 1110446ff9c980cfaf13ec0121b9c2c7.md b/notion_data_team_no_files/Revenue naming - 2024-09-30 1110446ff9c980cfaf13ec0121b9c2c7.md
new file mode 100644
index 0000000..265042a
--- /dev/null
+++ b/notion_data_team_no_files/Revenue naming - 2024-09-30 1110446ff9c980cfaf13ec0121b9c2c7.md
@@ -0,0 +1,29 @@
+# Revenue naming - 2024-09-30
+
+**Goal**: agree on what we consider as “Revenue” in the different reporting areas. Clarify revenue splits and adapt naming conventions.
+
+**Context:** The main discrepancy between “Finance Revenue” and “Data Revenue” is whether or not the Waiver Amount Paid back to Hosts is deducted.
+
+Current setup in Main KPIs Power BI Reporting
+
+- **`Total Revenue`**
+ - **`Invoiced Operator Revenue`**, coming from Xero
+ - **`Invoiced Booking Fees`** (Booking net fees)
+ - **`Invoiced Listing Fees`** (Listing net fees)
+ - **`Invoiced Verification Fees`** (Verification net fees)
+ - **`Invoiced APIs Revenue`**, coming from Xero
+ - **`Invoiced E-Deposit Fees`** (E-deposit net fees)
+ - **`Invoiced Guesty Fees`** (Guesty net fees)
+ - **`Guest Revenue`**, coming from Backend + Xero
+ - **`Waiver Net Fees`**, coming from Backend
+ - **`Waiver Amount Paid by Guests`** - **`Waiver Amount Paid back to Hosts`** (Xero)
+ - **`Deposit Fees`**, coming from Backend
+ - **`Check-in Hero Amount Paid by Guests`**, coming from Backend
+
+→ For **Invoiced Operator Revenue** and **Invoiced APIs Revenue**, `net` means `invoice` - `credit notes`
+
+→ We also have measures of **Guest Payments** that do not deduct the **Waiver Amount Paid back to Hosts**.
+
+→ We also compute **Host Resolutions Amount Paid** but these are NOT applied to any Revenue aggregation metric (standalone situation).
+
+Link to Whiteboard: https://guardhog-my.sharepoint.com/:wb:/g/personal/pablo_martin_superhog_com/EYv55vSak_pNhU3ShjFgCvQBk35XD75oo1ozxAmhSFYWHg?e=KYr8bD
\ No newline at end of file
diff --git a/notion_data_team_no_files/SSH Pubkeys 8aecd0622caf4512a22ee099ff49f208.md b/notion_data_team_no_files/SSH Pubkeys 8aecd0622caf4512a22ee099ff49f208.md
new file mode 100644
index 0000000..3caad3c
--- /dev/null
+++ b/notion_data_team_no_files/SSH Pubkeys 8aecd0622caf4512a22ee099ff49f208.md
@@ -0,0 +1,10 @@
+# SSH Pubkeys
+
+Here we store the team’s SSH key pubkeys (**not the private key!!!**) so that our beloved engineers can place them wherever necessary without bothering you.
+
+| Member | SSH PubKey |
+| --- | --- |
+| Pablo | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjP3laspy/wCRJjo1PLMPqtiSC9KMapVo5yy2W63+vQesvLR5PfPsRFTezNCWmFOt0rlv0GDvHPU4bDuxWzjKuS7ztILdTwoMm78RHles+7BHu78DGoMBoqFTFNLmgFYPtyfDtkFnckE04VZYHIRQxpjLcQ8/L1x1dNk6+IhfobIHCG3ONhy/bEbK6f1ZN3Wh7jT8Rc7EqMZ5CgQr3T8a797BOI3tgvVRwGHLacrahPRWDjq9o4X1h50IUAr2NXhQuG0G8u3sxuFFXrzdP5tcTjeUYhi/zHoRjafJsHUvGVbb//u8KqeXHXA2XP6+eFf96yQ+t6qmK5vUAEAl6eOshe1XRBbUc/MNlMnxfgoZHquVFqS1H+C/OWHP+EpGe2qHcv2iMstfy2p5AinK0elyN6/l6haV6Ai+TAP4/EGeuMHybwiWykxyPtLKW2mKDwvinlMSdHoao09vAB6jeKPRROn2u1WdkzykXXUCeLbd5MVMCKgC4KctLighROMaFoBk= pablo@hachepe |
+| PabloGanzúa | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr4oGnWQ8K9qWVsZYoE738CCr4113bEG/Khkh/LVTYs/gxT12gny+fw4LWwY6r4nNhYMF3bQEdAYN1SiRlZgqav/lEivLblB6gl/CsjcU6yVAaH+dKgcah0fZZidMwl0FuBhoSV9ppdeWfK3M1WzVR+Kf90LnT5Dsv8QgyXTIiZUfbMtXS7v6oU4o4O9h+bPCRLLaLbKHtzL17PepU0fLdh4WatQoV6xsgRkzU4cwINI8mR4/GrpJmXjt48cLJORoU9s1pzTJQz3WqOSDUtQHnEFLJJF6CWeASHlKhV0Ng3h/7gauns8rqtiwXViuREE4w6IU6/YBRJAArf7jLyTrKZHTjJk0zj1aAHTxSTFJYn1JofUJmZkOIoQ5FuKybRapFFwJzUx1IMhFRYXrqpCDQcGv9mwJWkcr0ra0tUJT0XbAkCiNS8YwTpQoOD80QuIfYuM2uWp/NQ8iWX5V0knEmPhaFGIfdXR5BM4agdYDbf5sJFjplpiMO2lYjVk39NB0= pablo@hachepe |
+| Uri | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDg5BT2t2x42W2U9NaQZDGBu0kfDbaNAvNbXjBl6IOB8hSX1Q10f8M/yE056P4F16CueGB4nNQxSyJWybUccA2FArgzLMQ2PjJJ3hHMMTQEESl9ObGpyOAiEh+Nn3/RNUeC7IuqNwiMh0TZh4Yoeq+aPKIzTfgRAjyuhNd/HG1aCNPPonrrr5eRgtOZRSydOA7F3HwVEFmuZ49wO0X+ATJtQvIm82m/TpXfu6XvUuqxjkhfGQ7M987M2b6dpwc7qU3PG6R/HDc20/Sv8HvwM40h7JvKiDjanVp2Hi0pS9seuqJRZwlNq4NYjqdcz3pdd2zF0uQ1wkJUOXU99GYUl9NEFxZ4b+DEEww5uKu4MqS16MD3KrxuRkj+XigxpMJNdCYNt/No88jryJOpx8F5a8P8QyvE10pyGZLYWJY7tziLksUYscLXcnuufXFhm7ixG88+lPqM4FYeyvVeVmIl/csGaSPk170al/IQa7Rpf3E8WUyrMu1up+WPGj7poWu2Nl26aaD61oJ4S4ddDNZIicg7jket8T7ukemsBNIWRSCTeUJTW4uf2wJTeFV+mfvIXn2Ne4gvNE0cZDCs0WcLjSQKK/WYsKOZEtx0S1w79V+uIPu8EU77c9OtvAgWS6o0J8y3GNAkTuqtTw7z7etQ4tSvufXIksJArT3wPPbsk1E6IQ== uri@LAPTOP-95ALTM7S |
+| Joaquín | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8h3nxm+BAo7Bul8iK0MVy/2o/jSc0eS0kNuAApXAYCSjI2l+EVX9gSC34DGPcw8TeFkkdCMH0nZHbPmRoPl4lxX2krxPyEAnzBsZ9g8iJTdYnJE1q2K6UdbWsPQ6KHvCjmgvJETcMHYvlYe+ht1mW8XKKnN8XaHLebnRzswP9WcGQinOZVWaDENzXUBBR1sdg1SPFrKiMa5oLfnCDI5ieKqP6v+x3obtl68gEkU+PZNhVqj7nlHcLHHyIhwvhZpX9xzku59phlLqAknXb4cyw/GT/PCYRA6mimRlnZ2eTl9SB06l/TtRj/HkSbQkWvO5QoqxC0bxbIeOxOKaDXiWSdaC6O/xRUDXCWGA2iYRFfO+RYF4DarxpnirJAYF2RiYAnVqi+eYPixh86qJJNZ8SeBzFAFKxjifP5gmcdRt9cunmZvuQarL6q3O5r45LvFD99yxU6FlDVmw+POuy9otGHAy7opGSyPTa9WstSRhhCsr2DHlv788d8d8qFBv3nOC39uu4LveR5o2/G+B/AhuubZ52wox0sxZbSAE3yEirK5DSR3v4iTDoL8LR++0Q+jI/jm+rCvh8anyILeTXtidJwTAgx1poMU6H01Lqi2ZxV+FI5cTn3Rc5Fcwbaw4DfILxtdHWW41Q7ugcXZGS6GRnGeBgSVmDTmK9PrMk294zhw== pablo@hachepe |
\ No newline at end of file
diff --git a/notion_data_team_no_files/Services and Revenue modelling 1420446ff9c980118e0cfffa7c41f369.md b/notion_data_team_no_files/Services and Revenue modelling 1420446ff9c980118e0cfffa7c41f369.md
new file mode 100644
index 0000000..a39c220
--- /dev/null
+++ b/notion_data_team_no_files/Services and Revenue modelling 1420446ff9c980118e0cfffa7c41f369.md
@@ -0,0 +1,92 @@
+# Services and Revenue modelling
+
+# Modelling strategy within DWH
+
+We want to have 3 main tables within DWH: `booking_service_line`, `booking_service_detail` and `booking_summary`
+
+1. **Booking Service Line** should contain the information of “this booking has these services with this price history”. It’s an append-only table that logs all price changes that apply to a booking service, depending on when these services are supposed to be charged, when they are actually charged, whether there are refunds, booking cancellations, etc. Summing all line prices over a Booking Service Detail should provide the current price of the service for the booking (see the sketch after the note below).
+2. **Booking Service Detail** should contain the information of “this booking with these booking attributes has these services with these service attributes and these are the unit and total prices as well as the moment in time it’s charged that apply for this service in the current state”.
+3. **Booking Summary** is an aggregate of the previous Booking Service Detail and should contain the information of “this booking with these booking attributes has this total amount supposed to be charged, the current amount charged as of today, and the remainder of what’s left to be charged.”
+
+> Note that at this moment we’re not modelling protections but ideally we should be able to fit them here.
+>
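+
+To make the relationship between the three tables concrete, here is a minimal sketch, assuming the column names described in the tables below (this is not the actual dbt model):
+
+```sql
+-- Hedged sketch: the current total price of a service is the signed sum of its lines,
+-- which Booking Service Detail / Booking Summary then expose as aggregates.
+select
+    d.id_booking,
+    d.id_booking_service_detail,
+    d.service_name,
+    sum(l.service_line_price_in_gbp * l.service_line_sign) as service_total_price_in_gbp
+from booking_service_detail as d
+left join booking_service_line as l
+    on l.id_booking_service_detail = d.id_booking_service_detail
+group by
+    d.id_booking,
+    d.id_booking_service_detail,
+    d.service_name
+```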
+
+## Booking Service Line
+
+Primary key: `id_booking_service_line`
+
+Additional unique test: `id_booking_service_detail` + `service_line_created_at_utc`
+
+> All string fields are capitalised in DWH.
+>
+
+| Field Name (DWH) | Field Name (backend) | Properties | Type | Possible Values | Description | Feasible? |
+| --- | --- | --- | --- | --- | --- | --- |
+| id_booking_service_line | ? | PK, not null | bigint | 1,2,3… | Unique identifier of the booking service line | ? |
+| id_booking_service_detail | BookingViewToService.Id | FK, not null | bigint | 1,2,3… | Unique identifier of the booking service detail | Yes |
+| id_booking | Booking.Id | FK, not null | bigint | 1,2,3,… | Unique identifier of the booking | Yes |
+| service_line_created_at_utc | ? | not null | timestamp | 2024-11-15 02:22:09.130 | Timestamp of when the booking_service_line record was first created in the backend | Yes |
+| service_business_scope | - | not null | string | PLATFORM (for the time being) | Identifies the main business scope (platform, guest products, apis) according to New Pricing documentation | Yes, handled on Data side |
+| service_business_type | - | not null | string | SCREENING, DEPOSIT_MANAGEMENT, PROTECTION, UNKNOWN | Identifies the service type according to New Pricing documentation | Partially, can be handled on Data side, ideally tagged in the backend in ProductService for Screening vs. Deposit Management. We need backfill of Ids in BookingViewToService to remove UNKNOWN cases |
+| service_source | - | not null | string | PRODUCT, PROTECTION, UNKNOWN (for platform) | Identifies the source of the information used (Product or Protection based on how backend is modelised) | Partially. We need backfill of Ids in BookingViewToService to remove UNKNOWN cases |
+| service_name | ? | not null | string | BASIC SCREENING, WAIVER PRO, BASIC PROTECTION, etc | Identifies the service name applied to a booking | Partially. We need backfill of Ids in BookingViewToService to remove usage of the ServiceName to fall into a more robust logic |
+| currency_code | ? | not null | string (char(3)) | GBP, USD, EUR, etc. | Identifies the currency in which the price is charged. Can be null. | Yes |
+| service_line_price_local_curr | ? | not null | decimal | 35 | Identifies the total line price of that service in a given booking in a given currency. Can vary over time depending on the service status, payments, etc. | No, need single source of truth in the backend |
+| service_line_price_in_gbp | ? | not null | decimal | 21.675 | Identifies the total price of that service in a given booking converted in GBP. Can be null. Can vary over time depending on the service status, payments, etc, as well as it can vary over time until the charge date due to the currency rate estimation in the future. | No, need single source of truth in the backend |
+| service_line_charge_date_utc | ? | not null | date | 2025-02-01, 2024-12-01, etc | Identifies the moment in time in which the service is charged. | No, need single source of truth in the backend |
+| service_line_sign | ? | not null | integer | -1, 1 | Identifies if the price appearing in service_line_price fields need to be added or subtracted. | |
+| service_line_comment | ? | | string | | A comment addressing the line. Can be null. | |
+| is_adding | ? | not null | boolean | | Flag to identify if the line adds a charge or not. | |
+| is_reverting | ? | not null | boolean | | Flag to identify if the line reverts a previous charge or not. | |
+
+## Booking Service Detail
+
+Primary key: `id_booking_service_detail`
+
+Additional unique test: `id_booking` + `service_name`
+
+> All string fields are capitalised in DWH.
+>
+
+
+| Field Name (DWH) | Field Name (backend) | Properties | Type | Possible Values | Description | Feasible? |
+| --- | --- | --- | --- | --- | --- | --- |
+| id_booking_service_detail | BookingViewToService.Id | PK, not null | bigint | 1,2,3… | Unique identifier of the booking service detail | Yes |
+| id_booking | Booking.Id | FK, not null | bigint | 1,2,3,… | Unique identifier of the booking | Yes |
+| service_detail_created_at_utc | BookingViewToService.CreatedDate | not null | timestamp | 2024-11-15 02:22:09.130 | Timestamp of when the booking_service_detail record was first created in the backend | Yes |
+| service_detail_updated_at_utc | BookingViewToService.UpdatedDate | not null | timestamp | 2024-11-15 02:22:09.130 | Timestamp of when the booking_service_detail record was last updated in the backend | Partially, at the moment using BookingViewToService information but should be updated with the latest invoicing line update |
+| booking_created_at_utc | Booking.CreatedDate | not null | timestamp | 2024-11-15 02:21:18.717 | Timestamp of when the corresponding booking record was first created in the backend | Yes |
+| booking_updated_at_utc | Booking.UpdatedDate | not null | timestamp | 2024-11-15 02:22:09.073 | Timestamp of when the corresponding booking record was last updated in the backend | Yes |
+| booking_check_in_at_utc | Booking.CheckIn | not null | timestamp | 2024-12-27 00:00:00.000 | Timestamp of the check in of the booking | Yes |
+| booking_check_out_at_utc | Booking.CheckOut | not null | timestamp | 2025-01-03 00:00:00.000 | Timestamp of the check out of the booking | Yes |
+| service_business_scope | - | not null | string | PLATFORM (for the time being) | Identifies the main business scope (platform, guest products, apis) according to New Pricing documentation | Yes, handled on Data side |
+| service_business_type | - | not null | string | SCREENING, DEPOSIT_MANAGEMENT, PROTECTION, UNKNOWN | Identifies the service type according to New Pricing documentation | Partially, can be handled on Data side, ideally tagged in the backend in ProductService for Screening vs. Deposit Management. We need backfill of Ids in BookingViewToService to remove UNKNOWN cases |
+| service_source | - | not null | string | PRODUCT, PROTECTION, UNKNOWN (for platform) | Identifies the source of the information used (Product or Protection based on how backend is modelised) | Partially. We need backfill of Ids in BookingViewToService to remove UNKNOWN cases |
+| service_status | BookingViewToService.Status | not null | string | NOFLAGS, FLAGGED, -, NOCHECKS, REJECTED, etc | Identifies the status of the service applied to a booking | Yes |
+| booking_status | BookingState.Name | not null | string | CANCELLED, APPROVED, NOTAPPROVED, INCOMPLETEINFORMATION, etc | Identifies the status of the booking | Yes |
+| service_name | ProductService.FullName / ProtectionPlan.FullName / BookingViewToService.ServiceName | not null | string | BASIC SCREENING, WAIVER PRO, BASIC PROTECTION, etc | Identifies the service name applied to a booking | Partially. We need backfill of Ids in BookingViewToService to remove usage of the ServiceName to fall into a more robust logic |
+| payment_type | PaymentType.FullName | | string | AMOUNT, PERCENTAGE, UNKNOWN, [NULL] | Identifies if the service price unit is an actual amount or a percentage of another value. It can be null if the host currency is not populated. | Partially. We need backfill of Ids in BookingViewToService to remove usage of the ServiceName to fall into a more robust logic |
+| price_base_unit | BillingMethod.FullName | | string | PER BOOKING, PER NIGHT, UNKNOWN, [NULL] | Identifies if the service price unit needs to be applied per booking or per number of nights between check-in and check-out. It can be null if the host currency is not populated. | Partially. We need backfill of Ids in BookingViewToService to remove usage of the ServiceName to fall into a more robust logic |
+| invoicing_trigger | InvoicingMethod.FullName | | string | PRE-BOOKING, AT DEPOSIT PAYMENT, AT WAIVER PAYMENT, POST-CHECKOUT, UNKNOWN, [NULL] | Identifies the moment in time in which this service needs to be charged. It can be null if the host currency is not populated. | Partially. We need backfill of Ids in BookingViewToService to remove usage of the ServiceName to fall into a more robust logic |
+| currency_code | Currency.IsoCode | | string (char(3)) | GBP, USD, EUR, etc. | Identifies the Host currency. Can be null. | Yes |
+| service_unit_price_local_curr | ProductServiceToPrice.Amount / ProtectionPlanToPrice.Amount | | decimal | 5 | Identifies the service unit price in the Host currency. Can be null. | Yes |
+| service_unit_price_in_gbp | - | | decimal | 3.235 | Identifies the service unit price converted to GBP with the rate of the date of charge. Can be null. Can vary over time until the charge date due to the currency rate estimation in the future. | No, need single source of truth in the backend (specifically for charge date). Will depend on Booking Service Line |
+| service_total_price_local_curr | - | | decimal | 35 | Identifies the current total price of that service in a given booking in the Host currency. Can be null. Can vary over time depending on the service status, payments, etc. | No, need single source of truth in the backend. Will depend on Booking Service Line. |
+| service_total_price_in_gbp | - | | decimal | 21.675 | Identifies the current total price of that service in a given booking converted in GBP. Can be null. Can vary over time depending on the service status, payments, etc, as well as it can vary over time until the charge date due to the currency rate estimation in the future. | No, need single source of truth in the backend. Will depend on Booking Service Line. |
+| service_charge_date_utc | - | | date | 2025-02-01, 2024-12-01, etc | Identifies the moment in time in which the service is charged. | No, need single source of truth in the backend. Will depend on Booking Service Line. |
+| is_missing_currency_code | - | not null | boolean | True/False | Flag to identify if the applied service has no currency informed. | Yes |
+| is_booking_cancelled | - | not null | boolean | True/False | Flag to identify it the Booking is cancelled or not. | Yes |
+| is_paid_service | - | not null | boolean | True/False | Flag to identify it the service total price is strictly greater than 0 or not. | Yes |
+| is_upgraded_service | - | not null | boolean | True/False | Flag to identify if the service is an upgraded version, meaning, it’s not a Basic Screening. | Yes |
+
+## Booking Summary
+
+To be filled; it will retrieve further attributes from Booking (id_user_host, id_user_guest, etc.) as well as aggregate information from the Booking Service Detail (prices, etc.).
\ No newline at end of file
diff --git a/notion_data_team_no_files/Set up SSH keys 6b05d5e432164d30b6546bb8bb4ba524.md b/notion_data_team_no_files/Set up SSH keys 6b05d5e432164d30b6546bb8bb4ba524.md
new file mode 100644
index 0000000..717829e
--- /dev/null
+++ b/notion_data_team_no_files/Set up SSH keys 6b05d5e432164d30b6546bb8bb4ba524.md
@@ -0,0 +1,62 @@
+# Set up SSH keys
+
+As a member of the Data Team, you’re going to need to use SSH keys for multiple reasons. Most importantly, you should have one personal key pair.
+
+If you don’t know what the hell SSH keys are, or you kind of know but they always give you a headache, you have two options on how to deal with this:
+
+- You follow the instructions here like a robot, and talk with Pablo whenever something is not working as expected.
+- You consume these wonderful materials, finally understand what the hell an SSH key is and how it works, and you use these instructions as a guide, but you know what the hell is going on so you have some chance at dealing with issues (of course, you can still talk with Pablo when something doesn’t work)
+ - https://www.youtube.com/watch?v=dPAw4opzN9g (How keys work)
+ - https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys (How keys work and practical info)
+
+## Creating your key pair
+
+- You will need to have a Linux terminal available. If you still don’t have WSL working on your laptop, get this done first: [How to set up WSL and Docker Desktop](How%20to%20set%20up%20WSL%20and%20Docker%20Desktop%204771651ae49a455dac98d7071abcd66d.md)
+- You should also have Keeper ready. But that’s fine because it’s the very first thing you did when you joined the company, right, even before learning how the coffee machine works… right?
+ - Just in case, a reminder on onboarding: [Onboarding checklist](Onboarding%20checklist%20d5eb8cb36b404fc9a0ccacddf9862001.md)
+- Open up an Ubuntu terminal
+- Run `ssh-keygen -t rsa -b 4096` (note: DevOps only accepts RSA keys, not modern EC ones. Nasty, nasty microsoft)
+ - You will get asked where you want to store the key and what you want to name it. Up to you. I advise you to store them in `home//.ssh/`. Feel free to use any name.
+ - You will be asked to add a passphrase. This is highly recommended. Make sure you note the passphrase down; there’s absolutely no way to recover it.
+- This will have created two files
+ - One with the name you provided (your private key)
+ - Another with the same name, but an additional `.pub` at the end (the matching public key. These two match together. That’s why it’s a *key pair*).
+ - Now make an entry in Keeper, private to you, to store these. You should store the passphrase in some text field, and the two key files (private and public) **as attachments.** Don’t store them as text, high chances of mistakes doing that.
+- Finally, change the permissions on your private key by navigating with the terminal to `~/.ssh/` and running `chmod 400 ` (a short sketch of the whole flow follows below).
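+
+Putting it together, a minimal sketch of the whole flow, assuming (purely for illustration) that you name the key `id_rsa_truvi`:
+
+```bash
+# Hypothetical key name, used only for this example
+ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_truvi   # DevOps only accepts RSA keys
+chmod 400 ~/.ssh/id_rsa_truvi                      # private key readable only by you
+cat ~/.ssh/id_rsa_truvi.pub                        # public key: this is what you share/upload
+```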
+
+## Adding your keys to Azure Devops
+
+There are two steps to set up SSH access to Azure Devops: placing your public key and configuring your ssh client to use your private key.
+
+To place your public key:
+
+- Go to https://guardhog.visualstudio.com/.
+- Go to `User Settings`. It’s the little icon of a person with a gear on the top right.
+- Click on `SSH Public Keys`
+- In the new page, add a new key.
+ - You can give it any name.
+ - The `Public Key Data` should hold the public key. To fill it in, run `cat ~/.ssh/`, copy the output and paste it here.
+- That should be it. You should now see the public key listed.
+
+To configure your ssh client:
+
+- Create (or edit, if it already exists) the file `~/.ssh/config`
+- Add a block like this:
+
+ ```bash
+ Host ssh.dev.azure.com
+ Hostname ssh.dev.azure.com
+ IdentityFile ~/.ssh/
+ ```
+
+- That’s it. Your SSH client will now know which key to use when interacting with Devops.
+
+Finally, be aware you might experience some buggy behaviour with the URL paths provided by Devops when cloning Git repositories with SSH. **Do not fall back to HTTP just because it’s giving you a headache.** The problem is probably easily fixable, you can read more here: [Little Git SSH cloning trick](Little%20Git%20SSH%20cloning%20trick%203d33758de34742b9ac180fd9c7b5e6b3.md)
+
+## Using SSH to access production VMs
+
+Some of the machines in production are accessible through SSH.
+
+If you need to log in there, depending on the circumstances, either we should add your public key to the right machine, or you should receive access to some of the service SSH keys that exist in the team.
+
+If you need this, contact Pablo to discuss and he will sort things out for you.
\ No newline at end of file
diff --git a/notion_data_team_no_files/Sync Meeting Notes 1830446ff9c9809d89a0e8a5321b1697.md b/notion_data_team_no_files/Sync Meeting Notes 1830446ff9c9809d89a0e8a5321b1697.md
new file mode 100644
index 0000000..6420515
--- /dev/null
+++ b/notion_data_team_no_files/Sync Meeting Notes 1830446ff9c9809d89a0e8a5321b1697.md
@@ -0,0 +1,45 @@
+# Sync Meeting Notes
+
+[2025-07-02 - Data Planning](2025-07-02%20-%20Data%20Planning%202240446ff9c980fc8c7cd4915b55ec12.md)
+
+[2025-06-25 - Data Planning](2025-06-25%20-%20Data%20Planning%2021d0446ff9c980e1b344ebe772a3b980.md)
+
+[2025-06-18 - Data Planning](2025-06-18%20-%20Data%20Planning%202150446ff9c980d8be61f2048a1546fa.md)
+
+[2025-06-11 - Data Planning](2025-06-11%20-%20Data%20Planning%2020f0446ff9c980269e0bddf562b133a0.md)
+
+[2025-06-04 - Data Planning](2025-06-04%20-%20Data%20Planning%202080446ff9c9803cba09d8b32b43501d.md)
+
+[2025-05-28 - Data Planning](2025-05-28%20-%20Data%20Planning%202010446ff9c980c3b428dd7d76aaffb5.md)
+
+[2025-05-21 - Data Planning](2025-05-21%20-%20Data%20Planning%201fa0446ff9c980f4a7b0d29c47c12c12.md)
+
+[2025-05-14 - Data Planning](2025-05-14%20-%20Data%20Planning%201f30446ff9c98022bcbae63f192a3e11.md)
+
+[2025-05-07 - Data Planning](2025-05-07%20-%20Data%20Planning%201ec0446ff9c980929fe1cb35108c6436.md)
+
+[2025-04-30 - Data Planning](2025-04-30%20-%20Data%20Planning%201e50446ff9c9806983f3c2c7de69b3fb.md)
+
+[2025-04-16 - Data Planning](2025-04-16%20-%20Data%20Planning%201d60446ff9c980aca796c1791efc320e.md)
+
+[2025-04-09 - Data Planning ](2025-04-09%20-%20Data%20Planning%201d00446ff9c98009a080e7cb8c5732af.md)
+
+[2025-03-26 - Data Planning](2025-03-26%20-%20Data%20Planning%201c20446ff9c980539269f1a4871bb0c7.md)
+
+[2025-03-19 - Data Planning ](2025-03-19%20-%20Data%20Planning%201bb0446ff9c98072bdbfcc71ff6a028b.md)
+
+[2025-03-12 - Data Planning](2025-03-12%20-%20Data%20Planning%201b40446ff9c98043a80bf1520165e3a4.md)
+
+[2025-03-05 - Data Planning ](2025-03-05%20-%20Data%20Planning%201ad0446ff9c9807aa104ef7a24b97d9e.md)
+
+[2025-02-26 - Data Planning](2025-02-26%20-%20Data%20Planning%201a60446ff9c980d7974cd6d6a1314068.md)
+
+[2025-02-19 - Data Planning](2025-02-19%20-%20Data%20Planning%2019e0446ff9c98063be3df87905cc8ca4.md)
+
+[2025-02-12 - Data Planning](2025-02-12%20-%20Data%20Planning%201970446ff9c980039759e389ac07cae9.md)
+
+[2025-02-05 - Data Planning](2025-02-05%20-%20Data%20Planning%201910446ff9c9803b8885da35ba2d9b71.md)
+
+[2025-01-29 - Data Planning](2025-01-29%20-%20Data%20Planning%201890446ff9c9803281b2eba928ce1a86.md)
+
+[2025-01-22 - Data Planning](2025-01-22%20-%20Data%20Planning%201830446ff9c980878e75c412ed07f0a4.md)
\ No newline at end of file
diff --git a/notion_data_team_no_files/Sync Meeting Notes 1830446ff9c9809d89a0e8a5321b1697.md:Zone.Identifier b/notion_data_team_no_files/Sync Meeting Notes 1830446ff9c9809d89a0e8a5321b1697.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Sync Meeting Notes 1830446ff9c9809d89a0e8a5321b1697.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/Technical Documentation - 2024-11-12 13c0446ff9c980719db3f4c420995f70.md b/notion_data_team_no_files/Technical Documentation - 2024-11-12 13c0446ff9c980719db3f4c420995f70.md
new file mode 100644
index 0000000..d9225f3
--- /dev/null
+++ b/notion_data_team_no_files/Technical Documentation - 2024-11-12 13c0446ff9c980719db3f4c420995f70.md
@@ -0,0 +1,161 @@
+# Technical Documentation - 2024-11-12
+
+# Purpose
+
+The `intermediate/kpis` folder is dedicated to KPI modelling, which covers any relevant dimension, measure and time aggregation needed to transform data into business metrics. As the Data Team, we should provide the highest possible quality for these KPIs.
+
+# Convention
+
+## Model names
+
+- Any model within the folder `intermediate/kpis` needs to follow this convention: `int_kpis__{structure_type}_{time_dimension}_{relevant_entity_name}`.
+- Structure types can be the following:
+    - `lifecycle`: any modelling that classifies certain behavior of a given entity that can vary over time. For instance, the listing lifecycle in terms of booking creation could categorise a listing based on whether it is new, active, never booked, inactive, etc.
+    - `dimension`: any modelling that allows us to segment or categorise data, so it can provide descriptive context for the measures. Segments resulting from lifecycles would likely have an equivalent dimension model.
+    - `metric`: any model that computes a given metric across different dimensions without aggregating it. This means that each dimension will have a dedicated column within the model.
+    - `agg`: a model that aggregates the data into 1) a date range, 2) a dimension and 3) a dimension value for any given metric. These will always depend on `metric` models.
+- Time dimension can be the following:
+ - `daily`: if the time granularity is daily
+ - `monthly`: if the time granularity is monthly, meaning metrics are aggregated to the month
+    - `mtd`: if the time granularity is month-to-date, meaning metrics are cumulative up to a certain day of the current month, and the same cut-off day is applied to previous months so they can be compared like-for-like.
+ - others.
+- `Relevant entity name` needs to easily and uniquely identify the entity being modelled, such as Created Bookings.
+- The only exception is `int_kpis__dimension_dates`, which, even though it is granular at the daily level, is deliberately simplified to avoid naming the model `int_kpis__dimension_daily_dates`.
+
+## Logic
+
+- The model that contains the deepest granularity for each entity should be the one handling the data gathering to compute raw metrics and dimensions. Likely, this model will be in the form of `int_kpis__metric_daily_{relevant_entity_name}`. In this case, joins outside of the `kpis` folder are accepted and expected in order to gather dimensions and metrics.
+- Downstream models within the `kpis` folder, regardless of whether they are `metric` or `aggregated` models, **should not join** with models outside of the `kpis` folder. Further enrichment can be done with outside models as long as the resulting models live outside the `kpis` folder, namely in the cross/general folders.
+- Downstream models within the `kpis` folder may join with other models within the `kpis` folder in order to create weighted or converted metrics.
+
+### Dimension aggregation
+
+Models that are dimension aggregates, namely `aggregated` or `agg` models, follow a common column pattern of `date`, `dimension` and `dimension_value`.
+
+For models that are not daily, such as `monthly` or `mtd`, date is substituted by a time range defined within `start_date` and `end_date`. Generally, `end_date` is part of the primary key alongside `dimension` and `dimension_value`, while `start_date` is only displayed for information purposes.
+
+To specify which dimensions should be retrieved for each aggregate model, we use the `get_kpi_dimensions_per_model` macro. This macro only takes one argument: the name of the entity being modelled, such as `CREATED_BOOKINGS`.
+
+By default, the macro will consider the following base dimensions as the expected ones:
+
+- `global`
+- `by_billing_country`
+- `by_number_of_listings`
+
+Generally, any model will also receive the `by_deal` dimension unless it is explicitly removed in the macro configuration. Additional entity-specific dimensions can be configured for the aggregation. For instance, `GUEST_PAYMENTS` can receive the four dimension aggregations mentioned above as well as `by_has_id_check`, as it’s required for other purposes.
+
+Lastly, be aware that when creating a new dimension, you’ll need to create a dedicated macro entry named `dim_{name_of_your_dimension}` that provides 1) the dimension name to be used and 2) the field containing the `dimension_value` used to compute the aggregation.
+
+# KPIs Products
+
+This is a summary of the Data Products that depend on the KPIs.
+
+## Main KPIs
+
+Reporting: [Main KPIs](https://app.powerbi.com/groups/me/apps/33e55130-3a65-4fe8-86f2-11979fb2258a/reports/5ceb1ad4-5b87-470b-806d-59ea0b8f2661/cabe954bba6d285c576f?experience=power-bi)
+
+Data Product page: [Business Overview Reporting Suite](https://www.notion.so/Business-Overview-Reporting-Suite-9e1662c7b9c042f3bd4c053364ba30ab?pvs=21)
+
+Computation flows:
+
+→ Note that these are shared within KPIs folder, and get split at cross level.
+
+- Name: `MTD + Monthly per category`
+ - Downstream tables:
+ - `cross/int_mtd_vs_previous_year_metrics`
+ - In turn, this depends on `cross/int_monthly_aggregated_metrics_history_by_deal` due to the computation of Churn Rate metrics, that are deal-dependant.
+ - `cross/int_mtd_aggregated_metrics`
+ - `general/mtd_aggregated_metrics`
+ - Time dimensions used:
+ - `Monthly` (depends on daily)
+ - `MTD` (depends on daily)
+ - Dimensions used:
+ - `global`
+ - `by_billing_country`
+ - `by_number_of_listings`
+ - Entities used:
+ - `Created Bookings`
+ - `Check Out Bookings`
+ - `Cancelled Bookings`
+ - `Billable Bookings`
+ - `Created Guest Journeys`
+ - `Started Guest Journeys`
+ - `Completed Guest Journeys`
+ - `Guest Journeys with Payment`
+ - `Guest Payments`
+ - `Invoiced Revenue`
+ - `Host Resolutions`
+ - `Listings`
+ - `Deals`
+ - Depends on:
+ - Flow: `Monthly by Deal`
+ - Table: `cross/int_monthly_aggregated_metrics_history_by_deal`
+- Name: `Monthly by Deal`
+ - Downstream tables:
+ - `cross/int_monthly_aggregated_metrics_history_by_deal`
+ - `general/monthly_aggregated_metrics_history_by_deal`
+ - Time dimensions used:
+ - `Monthly` (depends on daily)
+ - Dimensions used:
+ - `by_deal`
+ - Entities used:
+ - `Created Bookings`
+ - `Check Out Bookings`
+ - `Cancelled Bookings`
+ - `Billable Bookings`
+ - `Created Guest Journeys`
+ - `Started Guest Journeys`
+ - `Completed Guest Journeys`
+ - `Guest Journeys with Payment`
+ - `Guest Payments`
+ - `Invoiced Revenue`
+ - `Host Resolutions`
+ - `Listings`
+
+## Account Managers Reporting
+
+Reporting: [Account Managers Reporting](https://app.powerbi.com/groups/me/apps/bb1a782f-cccc-4427-ab1a-efc207d49b62/reports/797e7838-3119-4d0e-ace5-2026ec7b8c0e/cabe954bba6d285c576f?experience=power-bi)
+
+Data Product page: [Account Management Reporting Suite](https://www.notion.so/Account-Management-Reporting-Suite-13c0446ff9c980719656c20cae279937?pvs=21)
+
+Computation flows:
+
+- Name: `growth score by deal`
+ - Downstream tables:
+ - `cross/int_monthly_growth_score_by_deal`
+ - `general/monthly_growth_score_by_deal`
+ - Time dimensions used:
+ - `Monthly` (depends on daily)
+ - Dimensions used:
+ - `by_deal`
+ - Entities used:
+
+ → At this stage, uses the same as Monthly by Deal from Main KPIs. In terms of pure business sense, it would only use:
+
+ - `Created Bookings`
+ - `Guest Payments`
+ - `Invoiced Revenue`
+ - `Listings`
+ - Depends on:
+ - Flow: `Monthly by Deal` (Main KPIs)
+ - Table: `cross/int_monthly_aggregated_metrics_history_by_deal`
+- Name: `monthly aggregated metrics history by deal by time window`
+ - Downstream tables:
+ - `cross/int_monthly_aggregated_metrics_history_by_deal_by_time_window`
+ - `general/monthly_aggregated_metrics_history_by_deal_by_time_window`
+ - Time dimensions used:
+ - `Monthly` (depends on daily. It aggregates different months to generate larger aggregations, ex.: Previous 6 months).
+ - Dimensions used:
+ - `by_deal`
+ - Entities used:
+
+ → At this stage, uses the same as Monthly by Deal from Main KPIs. In terms of pure business sense, it would only use:
+
+ - `Created Bookings`
+ - `Guest Payments`
+ - `Invoiced Revenue`
+ - `Host Resolutions`
+ - `Listings`
+ - Depends on:
+ - Flow: `Monthly by Deal` (Main KPIs)
+ - Table: `cross/int_monthly_aggregated_metrics_history_by_deal`
\ No newline at end of file
diff --git a/notion_data_team_no_files/Technical Documentation - 2024-11-12 13c0446ff9c980719db3f4c420995f70.md:Zone.Identifier b/notion_data_team_no_files/Technical Documentation - 2024-11-12 13c0446ff9c980719db3f4c420995f70.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/Technical Documentation - 2024-11-12 13c0446ff9c980719db3f4c420995f70.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/VPN Set up 01affb09a9f648fbad89b74444f920ca.md b/notion_data_team_no_files/VPN Set up 01affb09a9f648fbad89b74444f920ca.md
new file mode 100644
index 0000000..2712bd3
--- /dev/null
+++ b/notion_data_team_no_files/VPN Set up 01affb09a9f648fbad89b74444f920ca.md
@@ -0,0 +1,48 @@
+# VPN Set up
+
+# Data VPN
+
+Follow these instructions to set up the Data VPN. This will allow you to access the DWH.
+
+1. Download Wireguard from the official webpage: [https://www.wireguard.com/](https://www.wireguard.com/) and install it on your device
+2. Ask Pablo for your config and to set up access for you on the server. Your config should look roughly like this:
+
+```bash
+[Interface]
+PrivateKey = +AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
+Address = 192.168.70.X/32
+DNS = 192.168.69.1
+[Peer]
+PublicKey = bKr79c5XbzudWeUjiwXcxsy1mrrEnrO4xSrNAUZv2GE=
+AllowedIPs = 192.168.69.1/32, 10.69.0.0/24, 52.146.133.0/24
+Endpoint = 172.166.88.95:52420
+```
+
+3. In Wireguard, click `Add Tunnel` and select `Add Empty Tunnel`
+
+
+
+4. Paste the config given by Pablo and add a name to the connection.
+5. Start the connection to test if it works. If it works, you should see the little green shield, and the `Transfer` section should show traffic in both the `received` and `sent` fields.
+
+
+
+6. You probably want to further test by connecting to some service within the Data subscription, like the DWH (see the example below).
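+
+A minimal sketch of such a test, assuming the tunnel gateway at `192.168.69.1` (the address used for `DNS` and listed in `AllowedIPs` in the config above) responds to ICMP:
+
+```bash
+# If the tunnel is up and routing traffic, these pings should get replies.
+# (Windows sends four echoes by default; on Linux/macOS stop with Ctrl+C.)
+ping 192.168.69.1
+```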
+
+# Backend (Core) VPN
+
+It’s likely the previous setup works for the DWH, but not for the backend. For the backend, follow these instructions:
+
+1. You will need to request the configuration file for the backend VPN. Ask someone in the Data Team or Ben Robinson
+2. In the Microsoft Store of your laptop, download Azure VPN Client and install it
+3. If asked, log in with your superhog/truvi account
+4. Once installed, on the bottom left corner, click the “+” button
+5. Click on Import and select the configuration file from the 1st step. That’s it.
+6. You probably want to further test by connecting to the Live schema and running a simple query.
+
+# Notes
+
+- Don’t use your private key on more than one laptop at the same time, or it might get permanently blocked.
+- It’s possible that having both VPNs active at the same time blocks certain access. Usually you will just need the Data VPN (Wireguard) turned on.
+    - When you need to access the Backend, just turn off the Data VPN (Wireguard) and turn on the Backend VPN (Azure VPN Client).
+    - When you have finished accessing the Backend, turn off the Backend VPN (Azure VPN Client) and turn the Data VPN (Wireguard) back on.
\ No newline at end of file
diff --git a/notion_data_team_no_files/VPN Set up 01affb09a9f648fbad89b74444f920ca.md:Zone.Identifier b/notion_data_team_no_files/VPN Set up 01affb09a9f648fbad89b74444f920ca.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/VPN Set up 01affb09a9f648fbad89b74444f920ca.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/dbt 1 7 to 1 9 upgrade 1740446ff9c98054915fd620df86339a.md b/notion_data_team_no_files/dbt 1 7 to 1 9 upgrade 1740446ff9c98054915fd620df86339a.md
new file mode 100644
index 0000000..f3cabf1
--- /dev/null
+++ b/notion_data_team_no_files/dbt 1 7 to 1 9 upgrade 1740446ff9c98054915fd620df86339a.md
@@ -0,0 +1,174 @@
+# dbt 1.7 to 1.9 upgrade
+
+In Jan ‘25, we set out to upgrade our dbt project version. This page tracks the task.
+
+## Starting details
+
+On commit `04a10cf9c52ad849ef6f61b133e605efc813e33d`, we hold the following versions in our `requirements.txt` file:
+
+```bash
+dbt-core~=1.7.6
+dbt-postgres~=1.7.6
+```
+
+Furthermore, in the Airbyte production machine, we have these versions installed in the `venv` dedicated to dbt:
+
+```bash
+dbt-core==1.7.9
+dbt-postgres==1.7.9
+```
+
+## Goal
+
+To bump versions into the highest `1.9` patch, ensure everything works, provide instructions for all analysts and also inform on new features available.
+
+At the time of writing this, the highest `1.9` patch for dbt is `1.9.1` (https://github.com/dbt-labs/dbt-core/releases/tag/v1.9.1)
+
+As for the Postgres adapter, the most recent version is `1.9.0`.
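+
+If you want to double-check the latest published versions yourself, one option (assuming a reasonably recent pip, since `pip index` is still flagged as experimental) is:
+
+```bash
+# Lists the versions of each package available on PyPI.
+pip index versions dbt-core
+pip index versions dbt-postgres
+```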
+
+## Steps
+
+- [x] Backup `pip freeze` output of production dbt deployment.
+ - Output here
+
+ ```python
+ agate==1.7.1
+ annotated-types==0.6.0
+ attrs==23.2.0
+ Babel==2.14.0
+ certifi==2024.2.2
+ cffi==1.16.0
+ charset-normalizer==3.3.2
+ click==8.1.7
+ colorama==0.4.6
+ dbt-core==1.7.9
+ dbt-extractor==0.5.1
+ dbt-postgres==1.7.9
+ dbt-semantic-interfaces==0.4.4
+ idna==3.6
+ importlib-metadata==6.11.0
+ isodate==0.6.1
+ Jinja2==3.1.3
+ jsonschema==4.21.1
+ jsonschema-specifications==2023.12.1
+ leather==0.4.0
+ Logbook==1.5.3
+ MarkupSafe==2.1.5
+ mashumaro==3.12
+ minimal-snowplow-tracker==0.0.2
+ more-itertools==10.2.0
+ msgpack==1.0.8
+ networkx==3.2.1
+ packaging==23.2
+ parsedatetime==2.6
+ pathspec==0.11.2
+ protobuf==4.25.3
+ psycopg2-binary==2.9.9
+ pycparser==2.21
+ pydantic==2.6.3
+ pydantic_core==2.16.3
+ python-dateutil==2.9.0.post0
+ python-slugify==8.0.4
+ pytimeparse==1.1.8
+ pytz==2024.1
+ PyYAML==6.0.1
+ referencing==0.33.0
+ requests==2.31.0
+ rpds-py==0.18.0
+ six==1.16.0
+ sqlparse==0.4.4
+ text-unidecode==1.3
+ typing_extensions==4.10.0
+ urllib3==1.26.18
+ zipp==3.17.0
+ ```
+
+- [x] Upgrade package versions in production dbt deployment.
+ - New pip freeze here.
+
+ ```python
+ agate==1.7.1
+ annotated-types==0.6.0
+ attrs==23.2.0
+ Babel==2.14.0
+ certifi==2024.2.2
+ cffi==1.16.0
+ charset-normalizer==3.3.2
+ click==8.1.7
+ colorama==0.4.6
+ daff==1.3.46
+ dbt-adapters==1.13.0
+ dbt-common==1.14.0
+ dbt-core==1.9.1
+ dbt-extractor==0.5.1
+ dbt-postgres==1.9.0
+ dbt-semantic-interfaces==0.7.4
+ deepdiff==7.0.1
+ idna==3.6
+ importlib-metadata==6.11.0
+ isodate==0.6.1
+ Jinja2==3.1.3
+ jsonschema==4.21.1
+ jsonschema-specifications==2023.12.1
+ leather==0.4.0
+ Logbook==1.5.3
+ MarkupSafe==2.1.5
+ mashumaro==3.12
+ minimal-snowplow-tracker==0.0.2
+ more-itertools==10.2.0
+ msgpack==1.0.8
+ networkx==3.2.1
+ ordered-set==4.1.0
+ packaging==23.2
+ parsedatetime==2.6
+ pathspec==0.11.2
+ protobuf==5.29.2
+ psycopg2-binary==2.9.9
+ pycparser==2.21
+ pydantic==2.6.3
+ pydantic_core==2.16.3
+ python-dateutil==2.9.0.post0
+ python-slugify==8.0.4
+ pytimeparse==1.1.8
+ pytz==2024.1
+ PyYAML==6.0.1
+ referencing==0.33.0
+ requests==2.31.0
+ rpds-py==0.18.0
+ six==1.16.0
+ snowplow-tracker==1.0.4
+ sqlparse==0.5.3
+ text-unidecode==1.3
+ types-requests==2.32.0.20241016
+ typing_extensions==4.10.0
+ urllib3==2.3.0
+ zipp==3.17.0
+ ```
+
+- [x] Attempt to run our usual dbt run. Check if everything works and logs look good.
+ - [x] If shit hits the fan, rollback, study issues and go back to step 1. Do not continue down this list.
+ - Shit did hit the fan
+ - We started to get this error when running any dbt cli command: `ModuleNotFoundError: No module named 'dbt.adapters.factory'`
+ - We fixed it by applying this good gentleman’s advice: https://github.com/dbt-labs/dbt-core/issues/10135#issuecomment-2113728550
+- [x] If all is well, open PR to bump versions in git repo.
+ - PR here: https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/3970
+- [x] Create instructions for team to upgrade their local environments and make sure to communicate thoroughly, ask everyone to ACK back once done.
+ - Instructions below in this page.
+- [x] Make TLDR on cool features we have obtained and reference to docs for further detail.
+
+## Instructions for analysts
+
+Team, we’ve upgraded our version of `dbt` to 1.9. This is already applied in our production deployment, and [this PR](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/3970) is ready to apply it on the project level.
+
+We also need you to apply this version upgrade on your laptops so that versions are in sync across environments and everything fits nicely. It’s very simple; you can find the steps below:
+
+- Open your VSCode workspace for the dbt project.
+- Open up a terminal and make sure it has the project virtual environment activated.
+- Make a backup of your Python packages in case things go wrong: `pip freeze > my_packages_backup.txt` (see the rollback sketch after this list).
+- Run the following sequence of commands to get things installed:
+ - `pip uninstall -y dbt-adapters`
+ - `pip install dbt-core==1.9.1 --upgrade`
+ - `pip install dbt-postgres==1.9.0 --upgrade`
+- To check that stuff works, just try to use dbt. You can begin with a humble `dbt --version`, which should show the new version that is installed. If that works fine, move into using dbt as usual in your local env.
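+
+If anything goes sideways, here is a minimal rollback sketch using the backup file created above (assuming the same virtual environment is still active; note this restores versions but does not remove packages added after the backup):
+
+```bash
+# Reinstall exactly the package versions captured before the upgrade.
+pip install --force-reinstall -r my_packages_backup.txt
+```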
+
+And that’s it! Welcome to dbt 1.9.
\ No newline at end of file
diff --git a/notion_data_team_no_files/dbt 1 7 to 1 9 upgrade 1740446ff9c98054915fd620df86339a.md:Zone.Identifier b/notion_data_team_no_files/dbt 1 7 to 1 9 upgrade 1740446ff9c98054915fd620df86339a.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/dbt 1 7 to 1 9 upgrade 1740446ff9c98054915fd620df86339a.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_data_team_no_files/dbt 1 9 1 to 1 9 8 upgrade 2100446ff9c980cbaa01e84c22bdd13c.md b/notion_data_team_no_files/dbt 1 9 1 to 1 9 8 upgrade 2100446ff9c980cbaa01e84c22bdd13c.md
new file mode 100644
index 0000000..e9fed99
--- /dev/null
+++ b/notion_data_team_no_files/dbt 1 9 1 to 1 9 8 upgrade 2100446ff9c980cbaa01e84c22bdd13c.md
@@ -0,0 +1,238 @@
+# dbt 1.9.1 to 1.9.8 upgrade
+
+In June ‘25, we set out to upgrade our dbt project version. This page tracks the task.
+
+## Starting details
+
+On commit `04a10cf9c52ad849ef6f61b133e605efc813e33d`, we hold the following versions in our `requirements.txt` file:
+
+```bash
+dbt-core~=1.9.1
+dbt-postgres~=1.9.0
+```
+
+Furthermore, in the Airbyte production machine, we have these versions installed in the `venv` dedicated to dbt:
+
+```bash
+dbt-core==1.9.1
+dbt-postgres==1.9.0
+```
+
+Besides the contents of the `requirements.txt` file, this is the full output of `pip freeze` on the Airbyte machine’s dbt environment:
+
+```bash
+agate==1.7.1
+annotated-types==0.6.0
+attrs==23.2.0
+Babel==2.14.0
+certifi==2024.2.2
+cffi==1.16.0
+charset-normalizer==3.3.2
+click==8.1.7
+colorama==0.4.6
+daff==1.3.46
+dbt-adapters==1.13.0
+dbt-common==1.14.0
+dbt-core==1.9.1
+dbt-extractor==0.5.1
+dbt-postgres==1.9.0
+dbt-semantic-interfaces==0.7.4
+deepdiff==7.0.1
+idna==3.6
+importlib-metadata==6.11.0
+isodate==0.6.1
+Jinja2==3.1.3
+jsonschema==4.21.1
+jsonschema-specifications==2023.12.1
+leather==0.4.0
+Logbook==1.5.3
+MarkupSafe==2.1.5
+mashumaro==3.12
+minimal-snowplow-tracker==0.0.2
+more-itertools==10.2.0
+msgpack==1.0.8
+networkx==3.2.1
+ordered-set==4.1.0
+packaging==23.2
+parsedatetime==2.6
+pathspec==0.11.2
+protobuf==5.29.2
+psycopg2-binary==2.9.9
+pycparser==2.21
+pydantic==2.6.3
+pydantic_core==2.16.3
+python-dateutil==2.9.0.post0
+python-slugify==8.0.4
+pytimeparse==1.1.8
+pytz==2024.1
+PyYAML==6.0.1
+referencing==0.33.0
+requests==2.31.0
+rpds-py==0.18.0
+six==1.16.0
+snowplow-tracker==1.0.4
+sqlparse==0.5.3
+text-unidecode==1.3
+types-requests==2.32.0.20241016
+typing_extensions==4.10.0
+urllib3==2.3.0
+zipp==3.17.0
+```
+
+## Goal
+
+To bump versions to the `1.9.8` patch, ensure everything works, provide instructions for all analysts and also report on any new features available. Here’s the release link on GitHub: https://github.com/dbt-labs/dbt-core/releases/tag/v1.9.8
+
+As for the Postgres adapter, the most recent version is still `1.9.0`, so there’s no need to upgrade anything.
+
+## Steps
+
+- [x] Backup `pip freeze` output of production dbt deployment.
+ - Output here
+
+ ```python
+ agate==1.7.1
+ annotated-types==0.6.0
+ attrs==23.2.0
+ Babel==2.14.0
+ certifi==2024.2.2
+ cffi==1.16.0
+ charset-normalizer==3.3.2
+ click==8.1.7
+ colorama==0.4.6
+ daff==1.3.46
+ dbt-adapters==1.13.0
+ dbt-common==1.14.0
+ dbt-core==1.9.1
+ dbt-extractor==0.5.1
+ dbt-postgres==1.9.0
+ dbt-semantic-interfaces==0.7.4
+ deepdiff==7.0.1
+ idna==3.6
+ importlib-metadata==6.11.0
+ isodate==0.6.1
+ Jinja2==3.1.3
+ jsonschema==4.21.1
+ jsonschema-specifications==2023.12.1
+ leather==0.4.0
+ Logbook==1.5.3
+ MarkupSafe==2.1.5
+ mashumaro==3.12
+ minimal-snowplow-tracker==0.0.2
+ more-itertools==10.2.0
+ msgpack==1.0.8
+ networkx==3.2.1
+ ordered-set==4.1.0
+ packaging==23.2
+ parsedatetime==2.6
+ pathspec==0.11.2
+ protobuf==5.29.2
+ psycopg2-binary==2.9.9
+ pycparser==2.21
+ pydantic==2.6.3
+ pydantic_core==2.16.3
+ python-dateutil==2.9.0.post0
+ python-slugify==8.0.4
+ pytimeparse==1.1.8
+ pytz==2024.1
+ PyYAML==6.0.1
+ referencing==0.33.0
+ requests==2.31.0
+ rpds-py==0.18.0
+ six==1.16.0
+ snowplow-tracker==1.0.4
+ sqlparse==0.5.3
+ text-unidecode==1.3
+ types-requests==2.32.0.20241016
+ typing_extensions==4.10.0
+ urllib3==2.3.0
+ zipp==3.17.0
+ ```
+
+- [x] Upgrade package versions in production dbt deployment with `pip install dbt-core==1.9.8 --upgrade`
+ - New pip freeze here.
+
+ ```python
+ agate==1.7.1
+ annotated-types==0.6.0
+ attrs==23.2.0
+ Babel==2.14.0
+ certifi==2024.2.2
+ cffi==1.16.0
+ charset-normalizer==3.3.2
+ click==8.1.7
+ colorama==0.4.6
+ daff==1.3.46
+ dbt-adapters==1.13.0
+ dbt-common==1.14.0
+ dbt-core==1.9.8
+ dbt-extractor==0.5.1
+ dbt-postgres==1.9.0
+ dbt-semantic-interfaces==0.7.4
+ deepdiff==7.0.1
+ idna==3.6
+ importlib-metadata==6.11.0
+ isodate==0.6.1
+ Jinja2==3.1.3
+ jsonschema==4.21.1
+ jsonschema-specifications==2023.12.1
+ leather==0.4.0
+ Logbook==1.5.3
+ MarkupSafe==2.1.5
+ mashumaro==3.12
+ minimal-snowplow-tracker==0.0.2
+ more-itertools==10.2.0
+ msgpack==1.0.8
+ networkx==3.2.1
+ ordered-set==4.1.0
+ packaging==23.2
+ parsedatetime==2.6
+ pathspec==0.11.2
+ protobuf==5.29.2
+ psycopg2-binary==2.9.9
+ pycparser==2.21
+ pydantic==2.6.3
+ pydantic_core==2.16.3
+ python-dateutil==2.9.0.post0
+ python-slugify==8.0.4
+ pytimeparse==1.1.8
+ pytz==2024.1
+ PyYAML==6.0.1
+ referencing==0.33.0
+ requests==2.31.0
+ rpds-py==0.18.0
+ six==1.16.0
+ snowplow-tracker==1.0.4
+ sqlparse==0.5.3
+ text-unidecode==1.3
+ types-requests==2.32.0.20241016
+ typing_extensions==4.10.0
+ urllib3==2.3.0
+ zipp==3.17.0
+ ```
+
+- [x] Attempt to run our usual dbt run. Check if everything works and logs look good.
+- [x] If all is well, open PR to bump versions in git repo.
+ - PR here: https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/5455
+- [x] Create instructions for team to upgrade their local environments and make sure to communicate thoroughly, ask everyone to ACK back once done.
+ - Instructions below in this page.
+- [x] Make TLDR on cool features we have obtained and reference to docs for further detail.
+
+## Instructions for analysts
+
+Team, we’ve upgraded our version of `dbt` to `1.9.8`. This is already applied in our production deployment, and [this PR](https://guardhog.visualstudio.com/Data/_git/data-dwh-dbt-project/pullrequest/5455) is ready to apply it on the project level.
+
+We also need you to apply this version upgrade on your laptops so that versions are in sync across environments and everything fits nicely. It’s very simple; you can find the steps below:
+
+- [ ] Open your VSCode workspace for the dbt project.
+- [ ] Open up a terminal and make sure it has the project virtual environment activated.
+- [ ] Make a backup of your python packages list in case things go wrong: `pip freeze > my_packages_backup.txt`
+- [ ] Run the following command to get things installed: `pip install dbt-core==1.9.8 --upgrade`
+- [ ] To check that stuff works, just try to use dbt. You can begin with a humble `dbt --version`, which should show the new version that is installed. If that works fine, move into using dbt as usual in your local env.
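+
+If you want a slightly more thorough check than `dbt --version` alone, here is an optional sketch:
+
+```bash
+# Confirm the versions directly from pip: dbt-core should report 1.9.8
+# and dbt-postgres should still report 1.9.0.
+pip show dbt-core dbt-postgres
+# Then let dbt report itself as a final check.
+dbt --version
+```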
+
+Regarding new stuff after this update:
+
+- This upgrade doesn’t come with new features. Since we only moved patch versions, we only get bug fixes and performance improvements.
+- Having said that, this upgrade should fix the funny “dbt breaks if I make a new version of a model but I don’t delete the `target` folder” bug that [caused this incident](20240913-01%20-%20dbt%20run%20blocked%20by%20%E2%80%9Cnot%20in%20the%20graph%201030446ff9c980c291f1d57751f443ee.md). So you should never have to care about deleting the `target` folder again, and we can remove the systematic deletion we had in our production scripts (I’ll take care of verifying the absence of the error and adjusting the scripts, no need to worry about that).
+
+And that’s it! Welcome to dbt `1.9.8`.
\ No newline at end of file
diff --git a/notion_data_team_no_files/dbt 1 9 1 to 1 9 8 upgrade 2100446ff9c980cbaa01e84c22bdd13c.md:Zone.Identifier b/notion_data_team_no_files/dbt 1 9 1 to 1 9 8 upgrade 2100446ff9c980cbaa01e84c22bdd13c.md:Zone.Identifier
new file mode 100644
index 0000000..73496f8
--- /dev/null
+++ b/notion_data_team_no_files/dbt 1 9 1 to 1 9 8 upgrade 2100446ff9c980cbaa01e84c22bdd13c.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_data_team_no_files.zip
diff --git a/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5.md b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5.md
new file mode 100644
index 0000000..45182ed
--- /dev/null
+++ b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5.md
@@ -0,0 +1,181 @@
+# Data Team Organisation
+
+# Data Vision
+
+
+
+# **Guiding Principles and Practices**
+
+## **📖 Transparency first**
+
+- We prioritize open and wide communication regarding current priorities and latest achievements. It is essential for everyone to understand whether we are working on a specific subject and, if not, why our efforts are focused elsewhere.
+- To facilitate this, we will provide a clear and visual representation of task prioritization. This approach ensures that while not all tasks have fixed deadlines, the evolving nature of our backlog and the potential for new high-priority requests are acknowledged and managed effectively.
+- Documentation can play a nice role in this area:
+ - **Data Catalogue:** A comprehensive index of our data sources, data products, and reports, ensuring everyone knows what data is available and how to access it.
+ - **Data News:** A weekly update detailing the latest developments and achievements of the Data team, keeping everyone informed of our progress and focus areas.
+ - **Data Papers:** A collection of ad-hoc, in-depth analyses. These papers aim to share valuable insights and prevent the unnecessary repetition of similar analyses.
+
+## **🏆 One team, one dream**
+
+- By collaborating: Encourage teamwork and cooperation across departments and teams.
+Foster an environment where everyone feels valued and their contributions are recognized.
+- By sharing goals: Common objectives that everyone works towards, ensuring that success is measured by the collective achievements rather than individual ones.
+- By granting mutual support: we promote a culture where team members support one another, share knowledge, and help each other overcome challenges. We encourage mentoring and peer-to-peer learning.
+
+### **🧭 We are Analytical Ambassadors**
+
+- We actively encourage the use of data and analytics in everyday decision-making processes, and we promote the benefits of data-driven decisions through workshops, training sessions, and regular communication.
+- We serve as evangelists by spreading awareness about the importance and advantages of using data across the organization.
+- We offer support and guidance on best practices for data collection, analysis, interpretation, A/B tests, etc.
+
+# **How do we collaborate with business teams?**
+
+## **Lines of Work**
+
+The Data team focuses on three primary lines of work, namely Maintenance (run), Projects (build), Ad-hoc Requests (business-oriented run).
+
+### **1. Maintenance (run)**
+
+- **Nature of work:** This involves the ongoing tasks necessary to ensure that all data systems and processes are functioning correctly. These tasks are reactive by nature and often arise unexpectedly.
+- **Tracking:** All maintenance work is tracked using a ticketing system in DevOps to keep a record of all work done.
+- **Time allocation**: No constraint, since usually other products will depend on it. Ideally, it should be low and part of the build should aim to reduce Maintenance time.
+- **Estimation and priority:** Given their unpredictable nature, these tasks are hard to estimate but are typically assigned top priority to minimize downtime and ensure seamless operation.
+- **Examples of tasks:** Fixing data pipeline issues, resolving system outages, addressing data discrepancies, and ensuring the accuracy and availability of critical reports.
+
+### **2. Projects (build)**
+
+- **Nature of work:** These involve long-term projects aimed at developing and enhancing mostly data products, but also data infrastructure, reporting capabilities, and workflows. These projects are strategic and contribute to the overall improvement of the data ecosystem.
+- **Tracking:** High-level initiatives will be made available in ProductBoard, to mimic the Product team. These will be linked to Epics or Features in DevOps, which will be further divided and refined into DevOps user stories.
+- **Time allocation:** No constraint. It should be the most important aspect after incident solving to ensure long-term growth capacity to the business.
+- **Estimation and priority:** While initial estimates can be made, committing to exact timelines can be risky due to the complexity and potential scope changes. Projects are planned with flexibility to accommodate evolving requirements following an Agile iterative approach (bring value fast, iterate afterwards).
+- **Examples of tasks:** Developing new data pipelines, creating comprehensive dashboards, implementing data governance frameworks, optimizing existing data processes, setting a new data quality alerting system, design and execution of A/B tests.
+
+### **3. Ad-hoc Requests (business-oriented run)**
+
+- **Nature of work:** These are short-term, unplanned tasks that arise from immediate business needs or questions. The tasks are often small in scope and can be completed quickly.
+- **Tracking:** All non-trivial ad-hoc requests are tracked using a ticketing system in DevOps to ensure they are addressed in a timely manner and to keep a record of all work done.
+- **Time allocation:** Each week, **a maximum of 10 hours** is allocated for ad-hoc tasks. The responsibility for these tasks rotates weekly between team members (for the moment, Pablo and Uri), designated as the ***Data Captain***.
+- **Estimation and priority:** These requests, by their nature, might not need to be groomed with the rest of the team. It is the responsibility of the Data Captain to decide whether a request needs to be handled right away or can wait, depending on its criticality. Common sense and business intuition should prevail.
+- **Examples of tasks:** Generating quick reports, running specific data queries, and providing data insights for immediate business decisions.
+
+## **Demand In-take Process**
+
+The process of taking in and managing requests from business teams is structured to ensure efficiency and clarity:
+
+1. **Request Submission:**
+ - The requests need to be submitted in the [***#data channel***](https://superhogteam.slack.com/archives/C06GFGHJD7H), via a Slack bot named ***Data Request.*** This should be the primary tool for business teams to submit their data requests. This ensures that all requests are captured in a centralized and accessible manner.
+2. **Triaging Requests:**
+ - The ***Data Captain*** is responsible for reviewing and categorizing incoming requests daily. Each request is triaged into one of the three lines of work:
+ - **Maintenance:** Logged in the DevOps system to track ongoing support tasks.
+ - **Ad-hoc Requests:** Also logged in DevOps, with a cap of 10 hours per week for the Data Captain to address these. They should be tagged with the `Data Captain` tag.
+ - **Build / Projects:** Major requests are documented with a detailed business rationale, goals, and a high-level overview. These need to be discussed and refined. These are detailed on the product board. Afterwards, these will be broken down into actionable tasks with clear definitions of done, timelines, and business justifications into DevOps.
+
+## **Communication and Priority Setting**
+
+Effective communication and clear priority setting are crucial for aligning the Data team’s efforts with business objectives:
+
+1. **Communication with main Project Stakeholders:**
+    - Tactical meetings are held with stakeholders involved in build projects. These meetings are focused on gathering requirements, high-level design, progress updates, addressing any issues, and aligning on project goals and timelines.
+2. **Priority Setting:**
+ - We assume mostly 3 sources of prioritisation:
+ - For key, strategic decisions, quarterly high-level discussions are conducted with the TMT (Top Management Team).
+        - For important decisions, more regular communication should happen with Matt Chetwood. We currently meet on a weekly basis.
+ - For the rest of decisions, the Data team will be autonomous.
+    - The first 2 types of discussions revolve around visualising the high-level product board, setting priorities based on business needs, and adjusting plans as necessary to ensure alignment with strategic objectives and a healthy team workload and capacity.
+
+# How do we collaborate internally within the Data team?
+
+Agile:
+
+- Go Kanban: [The Official Guide to The Kanban Method | Kanban University](https://kanban.university/kanban-guide/)
+- Use the [AzureDevOps board](https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories). The main reason is to foster collaboration if there are dependencies with other Tech teams (we’ll be on the same tools).
+- Weeklies on Wednesdays
+- Every day we all complete a written update. Read more below.
+- Team Retrospectives (once a month)
+- We do not do grooming sessions for the moment since it would be a bit overkill, we do on demand if needed for the time being
+
+Communication:
+
+- Internal communication through [data-team-internal](https://superhogteam.slack.com/archives/C072W6QB3UJ) channel
+- [Data news](https://www.notion.so/Data-News-7dc6ee1465974e17b0898b41a353b461?pvs=21): weekly basis summary towards everyone
+- Data Alerts for issues in our systems that need someone to look at.
+
+Documentation:
+
+- DWH + Reports/Data Products: → dbt model
+- On our tools and infra: do check the Azure Devops Git repositories
+- Anything else: here in Notion
+
+## Usage of board
+
+We have certain standards on how do we use [our Kanban task board](https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team):
+
+- Individual cards should be **obviously** doable in less than a week. If it isn’t, break it down into smaller chunks. (*Note that sometimes something will last more than a week because it is blocked by external factors. That’s fine).*
+- Cards should provide in their description:
+ - Context (why are we doing this? what’s the story behind it?).
+ - As much detail as it makes sense on the task itself.
+ - Note that, if you find yourself adding a wall of text, you might want to judge offloading the deep detail into Notion.
+ - Related info (links to conversations, docs, deeper explanations, etc).
+    - Accurate description of what needs to be achieved, including, in the Acceptance Criteria area, a definition of what it means for the task to be done.
+ - As a reference, someone in the team should be able to pick the card and completely make sense out of it without asking you anything.
+- Progress should be logged in the cards: update on a regular basis as you advance, encounter issues, further document, pivot because of new info, etc. How much? You judge. When in doubt, err on the side of over-logging.
+    - As a reference, someone in the team should be able to pick up the card while it’s WIP and continue where you left off without asking you anything.
+    - Also, make sure the card is in the right lane and always move it as needed.
+- When completing a card, please do leave a comment confirming that everything was done and add any info that might be relevant for future reference. The fact that the task was completed and what exactly was completed should be obvious to a reader.
+- All cards should belong to the right epic. If a card doesn’t have an epic, either start a new Epic or add it to the quarterly Unplanned epic. You decide what makes the most sense.
+- Any Testing Data Alert that is not trivial should get a card. As a rule of thumb, if you have already invested at least 5 min in an alert and it’s still not solved, it is not trivial.
+
+## Weekly meeting
+
+We use our weekly meeting as our synchronous touchpoint to organize work and discuss whatever we need to.
+
+The default agenda for this session is:
+
+- Get a quick summary from each member on what was achieved since the last weekly.
+- Discuss adhoc topics (whatever needs to be discussed in the team on a weekly basis)
+- Coffee break after the previous, which shouldn’t take more than 30 min.
+- Plan scopes for next week
+ - We decide which tasks each of us will go for, committing for an amount of work that we feel is doable within a week
+ - Note that it is expected that we all work on grooming the board continuously before this session so that we don’t need to define work to be done at this time.
+
+You might wonder: what happens if I finish my weekly planned work before the next weekly? If this is the case (and hopefully it will be relatively often), feel free to continue working on whatever you think is most important. This is typically an ideal moment for those pesky refactors, cleanups, docs improvements, environment setups and other important but not urgent tasks that struggle to make their way into your agenda.
+
+## Daily Written Update
+
+Every day, the team members write their Daily Written Update in the slack channel [#daily-written-update](https://teamtruvi.slack.com/archives/C0911ANQ7QS). In the update, we briefly answer the questions:
+
+> *What did I do yesterday?*
+>
+>
+> *What will I do today?*
+>
+> *Is there anything blocking my tasks? If so, what is it?*
+>
+
+These are expected to be completed every day before 10:00.
+
+These updates allow the team to stay up to date with each other’s status without needing synchronous meetings. It also helps us compose our Data News and longer-term reflections on what we have accomplished.
+
+## Data Captain duties
+
+Each week, a member of the team holds the Data Captain role. The Data Captain acts as a first line of contact for our colleagues in other teams and is also tasked with some regular team tasks. Specifically, the Data Captain:
+
+- Sends the Data News on Monday morning.
+ - Note that the Captain does not write all the news. It is up to the team members to write their updates there.
+- Triages any Data Testing alerts, documents them as a card in our board and sends them to their owner (which can be himself, of course).
+- Handles any PBI app access requests.
+- Sends the Business Targets weekly update to #all-staff on Fridays. [See this one by Uri as an example](https://teamtruvi.slack.com/archives/C6CKB771Q/p1749194573318649).
+
+# Incident Management
+
+- We make our best effort to build postmortem reports to document and reflect on any [incidents that take place in our systems](https://www.notion.so/Incident-Management-4829884213d744d4884be6c53988e696?pvs=21). You can find [our template here](https://www.notion.so/20241104-01-Booking-invoicing-incident-due-to-bulk-UpdatedDate-change-82f0fde01b83440e8b2d2bd6839d7c77?pvs=21), and [the list of incidents here](https://www.notion.so/Incident-Reports-9cdecb44c3914d24a0075ca1e8958fbf?pvs=21).
+- Any ongoing issues that are impacting business users or other dependants of the Data team should be announced in the `#data` slack channel. If the incident’s affected audience can be narrowed down to specific groups of users, please also try to notify them more directly to ensure they are aware of the situation.
+
+Subpages
+
+[Data Team Organisation - Season 1](Data%20Team%20Organisation%2081ea09a1778c4ca2ab39e7f221730cb5/Data%20Team%20Organisation%20-%20Season%201%2020f0446ff9c980d78a0bd614930586a3.md)
+
+[Season 2 Proposal](Data%20Team%20Organisation%2081ea09a1778c4ca2ab39e7f221730cb5/Season%202%20Proposal%2020e0446ff9c9805ca36bfa696e9e319c.md)
\ No newline at end of file
diff --git a/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5.md:Zone.Identifier b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5.md:Zone.Identifier
new file mode 100644
index 0000000..4eb97da
--- /dev/null
+++ b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_team_organization.zip
diff --git a/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3.md b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3.md
new file mode 100644
index 0000000..6ee7a10
--- /dev/null
+++ b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3.md
@@ -0,0 +1,154 @@
+# Data Team Organisation - Season 1
+
+
+
+# Data Vision
+
+
+
+# **Guiding Principles and Practices**
+
+## **📖 Transparency first**
+
+- We prioritize open and wide communication regarding current priorities and latest achievements. It is essential for everyone to understand whether we are working on a specific subject and, if not, why our efforts are focused elsewhere.
+- To facilitate this, we will provide a clear and visual representation of task prioritization. This approach ensures that while not all tasks have fixed deadlines, the evolving nature of our backlog and the potential for new high-priority requests are acknowledged and managed effectively.
+- Documentation can play a nice role in this area:
+ - **Data Catalogue:** A comprehensive index of our data sources, data products, and reports, ensuring everyone knows what data is available and how to access it.
+ - **Data News:** A weekly update detailing the latest developments and achievements of the Data team, keeping everyone informed of our progress and focus areas.
+ - **Data Papers:** A collection of ad-hoc, in-depth analyses. These papers aim to share valuable insights and prevent the unnecessary repetition of similar analyses.
+
+## **🏆 One team, one dream**
+
+- By collaborating: Encourage teamwork and cooperation across departments and teams.
+Foster an environment where everyone feels valued and their contributions are recognized.
+- By sharing goals: Common objectives that everyone works towards, ensuring that success is measured by the collective achievements rather than individual ones.
+- By granting mutual support: we promote a culture where team members support one another, share knowledge, and help each other overcome challenges. We encourage mentoring and peer-to-peer learning.
+
+### **🧭 We are Analytical Ambassadors**
+
+- We actively encourage the use of data and analytics in everyday decision-making processes, and we promote the benefits of data-driven decisions through workshops, training sessions, and regular communication.
+- We serve as evangelists by spreading awareness about the importance and advantages of using data across the organization.
+- We offer support and guidance on best practices for data collection, analysis, interpretation, A/B tests, etc.
+
+# **How do we collaborate with business teams?**
+
+## **Lines of Work**
+
+The Data team focuses on three primary lines of work, namely Maintenance (run), Projects (build), Ad-hoc Requests (business-oriented run).
+
+### **1. Maintenance (run)**
+
+- **Nature of work:** This involves the ongoing tasks necessary to ensure that all data systems and processes are functioning correctly. These tasks are reactive by nature and often arise unexpectedly.
+- **Tracking:** All maintenance work is tracked using a ticketing system in DevOps to keep a record of all work done.
+- **Time allocation**: No constraint, since usually other products will depend on it. Ideally, it should be low and part of the build should aim to reduce Maintenance time.
+- **Estimation and priority:** Given their unpredictable nature, these tasks are hard to estimate but are typically assigned top priority to minimize downtime and ensure seamless operation.
+- **Examples of tasks:** Fixing data pipeline issues, resolving system outages, addressing data discrepancies, and ensuring the accuracy and availability of critical reports.
+
+### **2. Projects (build)**
+
+- **Nature of work:** These involve long-term projects aimed at developing and enhancing mostly data products, but also data infrastructure, reporting capabilities, and workflows. These projects are strategic and contribute to the overall improvement of the data ecosystem.
+- **Tracking:** High level initiatives will be made available at ProductBoard, to mimic Product team. These will be linked to Epics or Features in DevOps, that will be further divided and refined into DevOps user stories.
+- **Time allocation:** No constraint. It should be the most important aspect after incident solving to ensure long-term growth capacity to the business.
+- **Estimation and priority:** While initial estimates can be made, committing to exact timelines can be risky due to the complexity and potential scope changes. Projects are planned with flexibility to accommodate evolving requirements following an Agile iterative approach (bring value fast, iterate afterwards).
+- **Examples of tasks:** Developing new data pipelines, creating comprehensive dashboards, implementing data governance frameworks, optimizing existing data processes, setting a new data quality alerting system, design and execution of A/B tests.
+
+### **3. Ad-hoc Requests (business-oriented run)**
+
+- **Nature of work:** These are short-term, unplanned tasks that arise from immediate business needs or questions. The tasks are often small in scope and can be completed quickly.
+- **Tracking:** All non-trivial ad-hoc requests are tracked using a ticketing system in DevOps to ensure they are addressed in a timely manner and to keep a record of all work done.
+- **Time allocation:** Each week, **a maximum of 10 hours** is allocated for ad-hoc tasks. The responsibility for these tasks rotates weekly between team members (for the moment, Pablo and Uri), designated as the ***Data Captain***.
+- **Estimation and priority:** These requests, by their nature, might not need to be groomed with the rest of the team. It is the responsibility of the Data Captain to decide if this request can or needs to be handled right away or it can wait depending on the criticality. Common sense and business intuition should prevail.
+- **Examples of tasks:** Generating quick reports, running specific data queries, and providing data insights for immediate business decisions.
+
+## **Demand In-take Process**
+
+The process of taking in and managing requests from business teams is structured to ensure efficiency and clarity:
+
+1. **Request Submission:**
+ - The requests need to be submitted in the [***#data channel***](https://superhogteam.slack.com/archives/C06GFGHJD7H), via a Slack bot named ***Data Request.*** This should be the primary tool for business teams to submit their data requests. This ensures that all requests are captured in a centralized and accessible manner.
+2. **Triaging Requests:**
+ - The ***Data Captain*** is responsible for reviewing and categorizing incoming requests daily. Each request is triaged into one of the three lines of work:
+ - **Maintenance:** Logged in the DevOps system to track ongoing support tasks.
+ - **Ad-hoc Requests:** Also logged in DevOps, with a cap of 10 hours per week for the Data Captain to address these.
+ - **Build / Projects:** Major requests are documented with a detailed business rationale, goals, and a high-level overview. These need to be discussed and refined. These are detailed on the product board. Afterwards, these will be broken down into actionable tasks with clear definitions of done, timelines, and business justifications into DevOps.
+
+## **Communication and Priority Setting**
+
+Effective communication and clear priority setting are crucial for aligning the Data team’s efforts with business objectives:
+
+1. **Communication with main Project Stakeholders:**
+    - Tactical meetings are held with stakeholders involved in build projects. These meetings are focused on gathering requirements, high-level design, progress updates, addressing any issues, and aligning on project goals and timelines.
+2. **Priority Setting:**
+ - We assume mostly 3 sources of prioritisation:
+ - For key, strategic decisions, quarterly high-level discussions are conducted with the TMT (Top Management Team).
+ - For important decisions, more regular communication should happen with the Head of Product (Ben) and Finance Director (Suzannah).
+ - For the rest of decisions, the Data team will be autonomous.
+ - The first 2 types of discussions revolve around visualising the high-level product board, setting priorities based on business needs, and adjusting plans as necessary to ensure alignment with strategic objectives and ensuring a good team workload and capacity.
+
+# How do we collaborate internally within the Data team?
+
+Agile:
+
+- Go Kanban: [The Official Guide to The Kanban Method | Kanban University](https://kanban.university/kanban-guide/)
+- Use [AzureDevOps board](https://guardhog.visualstudio.com/Data/_boards/board/t/Data%20Team/Stories). Main reason is to foster collaboration with tech teams if there’s dependencies with other Tech teams (we’ll be on the same tools)
+- Dailies every morning to comment on updates (mostly following Kanban board)
+- Team Retrospectives (once a month)
+ - Both internal and with customers
+- We do not do grooming sessions for the moment since it would be a bit overkill, we do on demand if needed for the time being
+
+Communication:
+
+- Internal communication through [data-team-internal](https://superhogteam.slack.com/archives/C072W6QB3UJ) channel
+- [Data news](https://www.notion.so/Data-News-7dc6ee1465974e17b0898b41a353b461?pvs=21): weekly basis summary towards everyone
+
+Documentation:
+
+- DWH + Reports/Data Products: → dbt model
+- Notion
+ - Data Catalog
+ - Internal docs
+ - Everything else
+- Infra
+ - git repo
+
+Miscellaneous:
+
+- Are we going to solve all requests? no. We’ll check first if it makes sense from a Data POV, considering the value of the actionable next step and the complexity of the request itself.
+
+# How do we collaborate with the tech team?
+
+
+
+- Data contracts
+- PR notification
+- Alignment on data models during design phases
+- Regular communications:
+ - Regular meetings Data x Tech (every 2 weeks?)
+ - Dedicated channel to discuss tech subjects? Might be a bit overkill with the current org-size
+
+
+# Request handling
+
+:WIP:
+
+# Incident Management
+
+- We make our best effort to build postmortem reports to document and reflect on any [incidents that take place in our systems](https://www.notion.so/Incident-Management-4829884213d744d4884be6c53988e696?pvs=21). You can find [our template here](https://www.notion.so/20241104-01-Booking-invoicing-incident-due-to-bulk-UpdatedDate-change-82f0fde01b83440e8b2d2bd6839d7c77?pvs=21), and [the list of incidents here](https://www.notion.so/Incident-Reports-9cdecb44c3914d24a0075ca1e8958fbf?pvs=21).
+- Any ongoing issues that are impacting business users or other dependants of the Data team should be announced in the `#data` slack channel. If the incident’s affected audience can be narrowed down to specific groups of users, please also try to notify them more directly to ensure they are aware of the situation.
+
+[Season 2 Proposal](Data%20Team%20Organisation%20-%20Season%201%2020f0446ff9c980d78a0bd614930586a3/Season%202%20Proposal%2020f0446ff9c98185b0c8f43ce26685bc.md)
+
+In `sh_invoicing_exporter`, we
\ No newline at end of file
diff --git a/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3.md:Zone.Identifier b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3.md:Zone.Identifier
new file mode 100644
index 0000000..4eb97da
--- /dev/null
+++ b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3.md:Zone.Identifier
@@ -0,0 +1,3 @@
+[ZoneTransfer]
+ZoneId=3
+ReferrerUrl=C:\Users\PabloMartn\Downloads\notion_team_organization.zip
diff --git a/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3/Season 2 Proposal 20f0446ff9c98185b0c8f43ce26685bc.md b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3/Season 2 Proposal 20f0446ff9c98185b0c8f43ce26685bc.md
new file mode 100644
index 0000000..4867b9d
--- /dev/null
+++ b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Data Team Organisation - Season 1 20f0446ff9c980d78a0bd614930586a3/Season 2 Proposal 20f0446ff9c98185b0c8f43ce26685bc.md
@@ -0,0 +1,83 @@
+# Season 2 Proposal
+
+This is a proposal from June ’25 to change some of the foundational ways of working that the Data Team agreed on around May ’24. We call this second year of the team Season 2.
+
+# Motivation
+
+About a year ago, we started rolling as a team. It was a time of uncertainty and new beginnings.
+
+After an initial effort to define some constraints on how we wanted to work, and after some months of rolling together, we came to find our own way of doing things. Just like the Milky Way went from a clump of shapeless gas and dust to a bunch of solid objects, we went from unorganized capital into a well-oiled team.
+
+The ways of working we designed back then, and have polished since, have served us well… **but today we’re in a Truvi that is nothing like the Superhog we were born in!**
+
+Since we started out as a team, many things have changed. Our colleagues have changed. Our customers have changed. Our systems have changed. We’ve probably changed too. This being the case, I thought it made sense to look back at how we’re doing things with a critical eye and adapt to where we are today.
+
+Finally, there’s a second reason I think it’s healthy to propose the changes you’ll see below: any pattern, repeated enough times, becomes boring and dull. One can only go through the same motions so many times before becoming desensitized to them. I believe mixing things up and trying to get the same output with a different approach will help keep us entertained and engaged.
+
+# What remains unchanged
+
+Even though I will propose a few changes, I believe some of the things we do are perfectly fine as they are, and I see little value in changing them. Since you’re familiar with them, I won’t describe them much and will just list them out:
+
+- Using Kanban to track work
+- Running our monthly team retros
+- The figure of the Data Captain as the colleague responsible for triaging requests and running weekly tasks
+ - Triaging Data Requests stays as is
+- Having Uri and Pablo manage the interaction with the wider org.
+
+# What I propose we change
+
+- Further formalize the Data Captain role
+ - Testing Data Alerts
+ - Triage (check what happened)
+ - Send to “owner”
+ - Outage Data Alerts
+ - Non-data-captain, all hands decks
+ - Data News
+ - Sending
+ - Not writing all updates
+ - Send targets
+ - PBI Access requests
+- More async
+ - Harder push for docs
+ - Use board as regular log for work
+ - Standards
+ - When creating
+ - Context (why is this ticket here)
+ - Related docs, links, etc.
+ - Acceptance criteria (what is the DoD)
+ - When working
+ - Log steps as we progress
+ - Keep the ticket in the right lane on the board
+ - When closing
+ - Final notes with outcomes + links to docs/PRs/relevant thingies
+ - Always keep in the right epic. If no epic, unplanned.
+ - Written-standup
+ - Every day before 10AM.
+ - Dedicated slack channel
+ - Template
+ - Yesterday I …
+ - Today I will …
+ - I’m blocked/Need a hand with…
+ - Replace dailies with weeklies
+ - After sync with Matt on Wednesdays
+ - Quick summary of previous weeks
+ - Offtopic-ish
+ - Break
+ - Planning (trying to commit to certain scope)
+ - 90 min with break?
+ - Harder enforcement of quiet days (Tuesdays and Thursdays for sure. Also Fridays?)
+ - Internally
+ - And externally
+ - Conversely, we should agree to designate Monday and Wednesday as meeting days.
+- More independence in the team for setting tasks
+ - Epics?
+ - Quarterly goals, people can craft their own tasks in alignment with that
+
+# Next steps
+
+- [ ] Improve UX on testing alerts (send reports with alert)
+- [ ] Document board usage standards
+- [ ] Document written standup standards and create slack channel and share template
+- [ ] Document weeklies + set them up + destroy dailies
+- [ ] Plan Q3 priorities list + epics set up
\ No newline at end of file
diff --git a/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Season 2 Proposal 20e0446ff9c9805ca36bfa696e9e319c.md b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Season 2 Proposal 20e0446ff9c9805ca36bfa696e9e319c.md
new file mode 100644
index 0000000..c79bd2d
--- /dev/null
+++ b/notion_team_organization/Private & Shared/Data Team Organisation 81ea09a1778c4ca2ab39e7f221730cb5/Season 2 Proposal 20e0446ff9c9805ca36bfa696e9e319c.md
@@ -0,0 +1,82 @@
+# Season 2 Proposal
+
+This is a proposal from June ’25 to change some of the foundational ways of working that the Data Team agreed on around May ’24. We call this second year of the team Season 2.
+
+# Motivation
+
+About a year ago, we started rolling as a team. It was a time of uncertainty and new beginnings.
+
+After an initial effort to define some constraints on how we wanted to work, and after some months of rolling together, we came to find our own way of doing things. Just like the Milky Way went from a clump of shapeless gas and dust to a bunch of solid objects, we went from unorganized capital into a well-oiled team.
+
+The ways of working we designed back then, and have polished since, have served us well… **but today we’re in a Truvi that is nothing like the Superhog we were born in!**
+
+Since we started out as a team, many things have changed. Our colleagues have changed. Our customers have changed. Our systems have changed. We’ve probably changed too. This being the case, I thought it made sense to look back at how we’re doing things with a critical eye and adapt to where we are today.
+
+Finally, there’s a second reason I think it’s healthy to propose the changes you’ll see below: any pattern, repeated enough times, becomes boring and dull. One can only go through the same motions so many times before becoming desensitized to them. I believe mixing things up and trying to get the same output with a different approach will help keep us entertained and engaged.
+
+# What remains unchanged
+
+Even though I will propose a few changes, I believe some of the things we do are perfectly fine as they are, and I see little value in changing them. Since you’re familiar with them, I won’t describe them much and will just list them out:
+
+- Using Kanban to track work
+- Running our monthly team retros
+- The figure of the Data Captain as the colleague responsible for triaging requests and running weekly tasks
+ - Triaging Data Requests stays as is
+- Having Uri and Pablo manage the interaction with the wider org.
+
+# What I propose we change
+
+- Further formalize the Data Captain role
+ - Testing Data Alerts
+ - Triage (check what happened)
+ - Send to “owner”
+ - Outage Data Alerts
+ - Non-data-captain, all hands decks
+ - Data News
+ - Sending
+ - Not writing all updates
+ - Send targets
+ - PBI Access requests
+- More async
+ - Harder push for docs
+ - Use board as regular log for work
+ - Standards
+ - When creating
+ - Context (why is this ticket here)
+ - Related docs, links, etc.
+ - Acceptance criteria (what is the DoD)
+ - When working
+ - Log steps as we progress
+ - Keep the ticket in the right lane on the board
+ - When closing
+ - Final notes with outcomes + links to docs/PRs/relevant thingies
+ - Always keep in the right epic. If no epic, unplanned.
+ - Written-standup
+ - Every day before 10AM.
+ - Dedicated slack channel
+ - Template
+ - Yesterday I …
+ - Today I will …
+ - I’m blocked/Need a hand with…
+ - Replace dailies with weeklies
+ - After sync with Matt on Wednesdays
+ - Quick summary of previous weeks
+ - Offtopic-ish
+ - Break
+ - Planning (trying to commit to certain scope)
+ - 90 min with break?
+ - Harder enforcement of quiet days (Tuesdays and Thursdays for sure. Also Fridays?)
+ - Internally
+ - And externally
+ - Conversely, we should agree to designate Monday and Wednesday as meeting days.
+- More independence in the team for setting tasks
+ - Epics?
+ - Quarterly goals, people can craft their own tasks in alignment with that
+
+# Next steps
+
+- [ ] Improve UX on testing alerts (send reports with alert)
+- [x] Document board usage standards
+- [x] Document written standup standards and create slack channel and share template
+- [x] Document weeklies + set them up + destroy dailies
+- [ ] Plan Q3 priorities list + epics set up
\ No newline at end of file