Launch HN: Grai (YC S22) – Open-Source Data Observability Platform
101 points by ersatz_username 9 months ago | 44 comments
Hi HN, my name is Ian. My co-founder Edward and I started Grai (https://grai.io), an open-source data observability platform. It helps prevent production data outages by evaluating changes to your data pipelines in CI, rather than at runtime.

Ever experienced a production outage due to changes in upstream data sources? That's a problem we regularly encountered, whether deploying machine learning models or keeping a data warehouse operational, and it led us to create Grai.

Systematically testing the impact of data changes on the rest of your stack turns out to be quite difficult when the same data is copied and used across many different services and applications. A simple change like renaming a column in a database can result in broken BI dashboards, incorrect training data for ML models, and data pipeline failures. Business users end up fielding questions like "why does revenue look different in different dashboards?"

These sorts of problems are commonly dealt with by passively monitoring application execution logs for anomalies that might indicate an outage. Our goal was to move that task out of runtime, where an outage has already occurred, and back into testing.

At its core, Grai is a graph of the relationships between the data in your organization, from columns in a database to JSON fields in an API. This graph allows Grai to analyze the downstream impact of proposed changes during CI and before they go live.

It includes a variety of pre-built integrations with common data tools such as PostgreSQL, Snowflake, dbt, and Fivetran, which automatically extract metadata and synchronize the state of your graph. It's built on a flexible data model backed by REST and GraphQL APIs and a Python client library, so users can directly build on top of Grai as they see fit. For example, because every object in Grai serializes to a yaml definition file, sort of like a CRD in Kubernetes, even if a pre-built integration doesn't exist, it's fairly easy to manually create or script a custom solution.
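As a rough sketch, a node definition looks something like this (field names here are simplified and may not match the real schema exactly):

    # Rough sketch of a Grai yaml definition; field names are simplified
    type: Node
    version: v1
    spec:
      name: public.customers.customer_id
      namespace: production
      metadata:
        node_type: Column
        data_type: integer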

We made the decision to build open source from the beginning, in part because we believe lineage is underutilized both organizationally and technologically. We hope to provide a foundation for the community to build cool concepts on top of, and we've already had companies come to us with amazing ideas, like optimizing their real-time query pipelines to take advantage of spot price arbitrage between cloud and on-prem.

We try not to be overly opinionated about how organizations work, so whether you maintain a development database or run service containers in GitHub Actions, it doesn't really matter. When your tests are triggered, we evaluate the new state of the environment and check for any impacts before reporting back as a comment on the pull request.
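As a sketch of what the CI hookup looks like with GitHub Actions (the action path and inputs below are illustrative placeholders, not the exact published interface):

    # Illustrative workflow; the action path and inputs are placeholders
    name: Grai CI
    on: pull_request
    jobs:
      evaluate-lineage:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - uses: grai-io/grai-actions/dbt@master   # placeholder action path
            with:
              namespace: production
              api-key: ${{ secrets.GRAI_API_KEY }}
              manifest-file: ./manifest.json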

Data observability can have unexpected benefits. One of our customers uses us because we make onboarding new engineers easier: we render an infinitely zoomable, Figma-like graph of the entire data stack, so they can visually explore end-to-end data flows and application dependencies.

You can find a quick demo here: https://vimeo.com/824026569. We've also put together an example getting-started guide if you want to try things out yourself: https://docs.grai.io/examples/enhanced-dbt. Since everything is open source, you can always explore the code (https://github.com/grai-io/grai-core) and docs (https://docs.grai.io), where we have example deployment configurations for docker-compose and Kubernetes.
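For a sense of scale, a minimal self-hosted setup looks roughly like this (image names and settings here are illustrative; see the repo for the real deployment files):

    # Minimal docker-compose sketch; image names and settings are illustrative
    version: "3.7"
    services:
      db:
        image: postgres:14
        environment:
          POSTGRES_USER: grai
          POSTGRES_PASSWORD: grai
          POSTGRES_DB: grai
      server:
        image: ghcr.io/grai-io/grai-core/grai-server:latest  # illustrative image
        ports:
          - "8000:8000"
        depends_on:
          - db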

We would love to hear your feedback. If there's a feature we're missing, we'll build it. If you have a UX or developer experience suggestion, we'll fix it. If it's something else, we want to hear about it. Thank you in advance!




How do you guys do the static analysis on the queries? I notice you support dbt, BigQuery, etc., but all of our company's pipelines are in Airflow. That makes static analysis difficult because we're dealing with arbitrary Python code that programmatically generates queries :).

Any plans to support Airflow in the future? Would love to have something like this for our company's 500k+ Airflow jobs.


It depends a bit on your stack. Out of the box it does a lot with the metadata produced by the tools you're using. With something like dbt we can extract your test assertions, while for Postgres we might use database constraints.

More generally, we can embed the transformation logic of each stage of your data pipelines into the edge between nodes (like two columns). Like you said, in the case of SQL there are lots of ways to statically analyze that pipeline, but it becomes much more complicated with something like pure Python.

As an intermediate solution, you can manually curate data contracts or assertions about application behavior in Grai, but these inevitably fall out of sync with the code.
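For instance, a hand-curated edge between two columns can be captured as a yaml definition roughly like this (a sketch only; field names simplified):

    # Sketch of a hand-curated edge between two columns; field names simplified
    type: Edge
    version: v1
    spec:
      name: orders.amount -> revenue.total
      source:
        name: public.orders.amount
        namespace: production
      destination:
        name: analytics.revenue.total
        namespace: production
      metadata:
        edge_type: ColumnToColumn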

Airflow has a really great API for exposing task-level lineage, but we've held off on integrating it because we weren't sure how to convert that into robust column- or field-level lineage as well. How are y'all handling testing / observability at the moment?


For testing:

- we have a dedicated dev environment for analysts to get a dev/test loop. None of the pipelines can be run locally, unfortunately.

- we have CI jobs and unit tests that are run on all pipelines

Observability:

- we have data quality checks for each dataset, organized by tier. This also integrates with our alerting system to send pages when data quality dips.

- Airflow and our query engines (Hive/Spark/Presto) each integrate with our in-house lineage service. We have a lineage graph that shows which pipelines produce/consume which assets, but it doesn't work at the column level because our internal version of Hive doesn't support that.

- we have a service that essentially surfaces observability metrics for pipelines in a nice UI

- our Airflow is integrated with PagerDuty to send pages to owning teams when pipelines fail.

We'd like to do more, but nobody has really put in the work to make a good static analysis system for Airflow/Python. Couple that with the lack of support for column-level lineage OOTB and it's easy to get into a mess. For large migrations (Airflow/infra/Python/dependency changes) we still end up doing ad hoc analysis to make sure things go right, and we often miss important things.

Happy to talk more about this if you're interested.


Hey, I like this project and will write it down to show it to superiors AtOnePoint™. Looks well done.

If you allow me a remark on the website: it requires JS from 8 separate domains to show content, which is fine, but I know that more technical readers can be sensitive to these aspects. Secondly, the browser addon DarkReader doesn't work well with the website, so I had to turn it off and could only browse it in light mode.

Perhaps these could be actionable points for the future.

Good job and keep going!


Shoot! I wasn't familiar with DarkReader but I just created a ticket to see if we can get it fixed. We recently redid the website and there's still plenty of room for improvement. Thanks for pointing that out :).


Interesting.

I have experienced "devs changed something and it broke reporting" more times than I can count. Typically the reason boils down to 1) they don't care, or 2) their management doesn't care. That has always felt like an insurmountable cultural problem, but I do wonder: if a bot posted a PR comment on breaking changes before they got deployed, might that move the needle, just a little?


I think the data quality (DQ) space is pretty competitive.

There's Datafold [1], Databend (acquired by IBM), Atlan, and Great Expectations, to name a few, doing very similar things.

Just looking at the video, I couldn't figure out what's differentiated. I hope you have success in the space.

[1] https://www.datafold.com


A lot of these tools are very, very different from each other, so it's hard to address each individually. Just by way of example, Databend is a full-on data warehouse, while Great Expectations is a testing framework evaluating data assertions (i.e. "I see there are nulls, but you wrote a test which says there shouldn't be").

Here are some things we think are really important, though:

1. Data quality testing ideally happens during CI, not after merge.

2. Developers come first. Virtually every aspect of the tool can be customized, modified, and extended down to the basic data model without changing any upstream core code. Want to build your own custom application on top of your data lineage? Great! Have at it!

3. Users should be able to own not just their own data but their own metadata. We go to great lengths to maintain feature parity between the cloud and self-hosted application.


My bad, I was actually referring to databand [1]

[1] https://databand.ai/


The license chosen [1] (Elastic License 2.0) is one that isn't considered open source by many, due to not being OSD [2] compatible. Were you aware of this before marketing it as open source, and, out of interest, does the license and usage of "open source" come into conversation when going through the YC process?

[1] https://github.com/grai-io/grai-core/blob/master/LICENSE [2] https://opensource.org/osd/


Indeed, it's "source available" at best, not open source, since it limits how other parties can use the software in ways the creators don't like.


Just to be clear, the only limitation imposed by the license is preventing someone from reselling a cloud-hosted copy of the tool. The code is otherwise totally free to use, fork, modify, etc...


That's great; it's not open source, though, so you shouldn't call it open source. Call it something else.


We are pretty open to feedback on licensing and have gone back and forth internally because, frankly, we'd rather use a copyleft license.

We believe a project like this needs financial backing and a dedicated team driving development, but therein lies the tension. The common monetization paths either feature-lock critical self-hosted capabilities like SSO behind a paywall and/or monetize through a cloud-hosted option.

The Elastic License is an attempt to maintain feature parity between the cloud and self-hosted tool while still being protected from something like the big cloud providers ripping off the code altogether.

In all seriousness though, we would love to hear suggestions if you think there's a better path.


I personally don't have anything against the license you've chosen, and I respect your right to protect your efforts against usage you don't desire. I just think it's better to avoid the term "open source" if going down the ELv2 path, and to use something like "source available" or "fair code" instead, to prevent confusion by misrepresenting this as what is commonly considered open source.

If you'd like further detail in regards to why I (and others) think this matters, I've previously written my thoughts up here: https://danb.me/blog/posts/why-open-source-term-is-important...


Thanks for the link. Some personal thoughts:

I think the effort to standardize what is meant by a term like "open source" is generally good, but I also think the meaning of language is always up for debate, and the OSI's definitions are only right if they are useful.

Of the two clauses you pulled out of the EL2 license, the first one - "You may not provide the software to third parties as a hosted or managed service ..." - seems fine to me as "open source", while the second - "You may not move, change, disable, or circumvent the license key functionality ..." - seems not-fine.

(So for what it's worth, because of that second clause, I am agreeing with you that this license shouldn't be called "open source" - but it seems unfortunate for OP if they aren't relying on that clause.)

I think the issue I have is with the 6th OSI definition you pulled out - "No Discrimination Against Fields of Endeavor" - it seems to me like that one could use some tweaking. I do think it's important that the ability to run "Derived Works" is not limited by "field of endeavor", but I think selling managed software as a service could be a specific carve-out to that. It seems totally reasonable and not violating the spirit of "open source" to say you can modify and self-host for any purpose, but you can't re-sell.


> It seems totally reasonable and not violating the spirit of "open source" to say you can modify and self-host for any purpose, but you can't re-sell.

Personally I would heavily disagree with that, and that statement is something I see as against the spirit of open source. In my view, open source and free software mainly intend to use licensing to put the freedoms and rights of the code and its users in front of those of its authors. Being able to re-sell has always been a significant point, and part of the spirit, of free software and open source.


It's just an honest disagreement. I don't think your opinion is the way, but I don't begrudge it.

I'm just not an ideologue. What matters to me is having as much software tooling as possible that is useful to me. I consider tools that I can modify and run myself to be more useful than those that are proprietary. But I don't require or demand the ability to re-sell someone else's software; that isn't a capability that is useful to me. That capability is pretty much entirely only useful to Amazon and Google, and that's just not something I care about optimizing for.


This question is for my education alone, but since you seem quite passionate I am curious.

I just read a super long article about licensing to understand your comment as well as the article you wrote. Under these "source available" licenses, I can still sell the software within some kind of package, correct? Like if I create my own PR linter, I can use Grai and still sell it? I just can't host Grai with some observability on top and sell it? Or am I misunderstanding?


Just to be clear for my responses, I am not a legal expert in any way.

> Under these "source available" licenses, I can still sell the software within some kind of package correct? Like if I create my own PR linter I can use Grai and still sell it?

"Source available" means the source is accessible. Whether you can sell the software depends on the license. In the case of the Elastic License v2 as used here, I believe you could re-sell the works but you cannot re-license and the original limitations will remain which include providing as a hosted/managed service. There are other limitations too, the limitations around license keys functionality could be a significant hindrance depending on specific use and implementation.

> I just can't host grai with some observability and sell it? Or am I misunderstanding?

That is kind of the most significant limitation, but ultimately you are subject to the details of all the limitations:

~~

>> You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.

>> You may not move, change, disable, or circumvent the license key functionality in the software, and you may not remove or obscure any functionality in the software that is protected by the license key.

>> You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor’s trademarks is subject to applicable law.

~~

Note that there's nothing about selling at all. Also think about how widely that first limitation could cover different types of use cases. And, as touched on above, that second limitation could be used in quite a protective/combative way to make significant parts of the software unusable in re-use.


Totally fair and appreciate the (well written) thoughts.


> The Elastic license is an attempt to maintain feature parity between the cloud and self-hosted tool while still being protected

I don't know enough about the Elastic License, but I very much prefer this approach. I've seen a lot of source-available projects deliberately refuse to implement features, and generally let the product managers spend time on dark-pattern bait-and-switch to drive sales. It misaligns the incentives and complicates the product offering. It's infuriating for developers. This is much clearer for everyone.


> We believe a project like this needs financial backing and a dedicated team driving development

What benefits do you get from being open source other than the OS stamp of approval?

Perhaps the solution is to just go closed source. I'm all for open source, but I'm not the biggest fan of open core or source available. All it does is hurt the business with little benefit to me. I'd rather you make more money and support me, or go full altruistic and make it truly open source.


The short answer is that we aren't open source because we want to get anything out of it. Of course, to each their own, but I've personally gotten a ton of value from open-core tools in the past.


Don't its customers get the benefit of being able to self-host and modify for their own internal use? Seems like a big benefit to me...


My point is if it's a commercial entity, I'd rather pay them to make the modifications and then maintain it than pay my own engineers to do it.


Yes, but if I want to make larger modifications than would make sense for the core project, I'd like to have the ability to self host my modified version (and ideally have a support contract as well, if they're into that).

So you asked what the point of doing this is for them, from a business perspective. I think the point is marketing / smoothing the sales process. I feel much better about using SaaS products that I know I can self host if necessary, even if I'm unlikely to actually do so.

Frankly, it's just the same reason I prefer any of my tools to be open source. I don't like using proprietary programming languages or frameworks, because I can't fix things that are broken even if I want to. This remains true even though I can count the number of times I've actually done this on one hand.


Thanks for sharing! Seems like this is a dbt-centric lineage tool that surfaces failed tests in the lineage itself?

Unlike a data observability platform like Monte Carlo which proactively monitors data, am I correct in assuming that your solution is less focused on data observability (i.e. monitoring production data and conducting root cause analysis / impact analysis) and more on ensuring reliable CI/CD?


I wouldn't personally draw such a bright line between monitoring and reliable CI/CD. That division definitely exists, but partly as a product of the complexity introduced by fragmented data systems. In some ways an ideal world is one where the need for extraordinarily complex monitoring tools is actually pretty limited, because we have tools to validate end-to-end data pipelines before making code changes, if that makes sense.

We actually already do data monitoring as well, although we haven't built the specific alerting features of Monte Carlo. There are quite a few tools that do that really well, so it's not our focus at the moment.


Your intro video started with the assumption, I think, that a team already has some infra relating to this called DBT. It would be nice to have a video for onboarding from scratch, assuming there's no prior effort toward data observability.


Pre-built integrations are a big part of what makes onboarding easy, but it sort of ends up in a catch-22 situation where whichever integration gets highlighted is only directly applicable to the people using those tools.

If you have a different toolset, onboarding will look exactly the same; there's nothing truly dbt-specific at work here. It's a good idea though! We really should put together a few other combinations so more people can see their own stack represented.


Just a side note, DBT is being required everywhere these days


I believe you w.r.t. tech-first companies. I work in a tiny software dept in a small service company, and we have no infrastructure like this at all. It would be nice to know how I can go from zero to Grai.


What does your data stack look like? I'll put something together specifically for you.


Website looks great!

Just FYI, I’m getting a “failed to load search index” error in your docs.

Also I saw GitHub Actions called out in the workflow. Do you have GitLab support?


Thanks so much! Really appreciate the kind words.

We haven't had anyone request GitLab yet but would love to add support! Any chance you'd be willing to beta test for us? If so, shoot me an email at ian@grai.io :).

EDIT: It looks like the index issue is related to our search provider. Were you able to eventually load the page or is it fully blocking you?


Just tried it again, and I’m still having the same issue with search :(.


Weird, sorry about this! We just removed the search and redeployed the docs. Hopefully that will fix it until we can sort the problem out. Would you mind giving it another shot?


I recently demoed an observability platform with another company, and one of my biggest gripes was that we weren’t able to “observe” the error before it actually made it to the open.

And that it took 2+ weeks to train their models with the table metadata - so time to value for my team was always “in two weeks”.

Glad to see y’all going against that trend!


Really appreciate the kind words. If you don't mind my asking, what sort of issues were y'all experiencing that prompted you to start looking for solutions now?


We had this problem at one of my previous companies; glad to see someone addressing it, and the open-source approach just makes so much sense. Best of luck


What features does Grai have that make it superior to DataHub?

As I see it, DataHub already gives you a data lineage. Is that not enough?


I definitely like the flexibility to be able to create a custom solution with a yaml file. Nice idea. All the best!


Elastic v2 if one is interested in such things: https://github.com/grai-io/grai-core/blob/v0.1.33/LICENSE



