ChaosSearch Blog - Tips for Wrestling Your Data Chaos

Let’s Talk About the ELK in the Room

Written by Kevin Davis | Apr 4, 2019

Intro

If you’ve worked at any company over the last five years and haven’t at least heard of Elasticsearch, I would be surprised. It is one of the most widely used open-source projects available to developers, and it’s so powerful that some companies are built around it or directly on top of it. It's certainly not the only tool available, either; you could use Solr as an alternative, and in some cases it's just as feature-rich. Like any tool, though, there is always a trade-off between cost and functionality: you rarely get both. That is why I want to talk about the ELK in the room, because we believe you can have both. While others have looked to use Elasticsearch in their products, CHAOSSEARCH has taken a different approach: we do not run any Elasticsearch within our platform. It is all powered by our own proprietary technology.

Even though the title of this post says ELK, I really just want to talk about the E (Elasticsearch) part of the acronym. For anyone unfamiliar with Elasticsearch, it is an open-source, RESTful, distributed search and analytics engine. For me, it’s been the one tool my previous companies have used for running complex queries on large datasets at scale. As powerful as Elasticsearch is, do you ever find yourself saying “it can be complicated to manage, it takes more time than anyone ever expects, and the cost is hard to justify”? Hopefully, I can provide some clarity around those statements and show how CHAOSSEARCH will help you turn your S3 infrastructure into a searchable elastic cluster and replace your current Elasticsearch deployment.

The setup process for an Elasticsearch cluster is relatively straightforward; configuration management tools like Chef or Puppet can usually handle building the cluster, or you can use the hosted cloud version. The next challenge comes when you have to set appropriate sizing (and resizing) for the nodes, figure out which nodes will be masters and how many of them you will need, build your indexes, and decide where to send your data. Again, the cloud version may help with some of this, but you’re still responsible for the overall management.
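As a rough illustration of the per-node configuration this involves, here is a minimal `elasticsearch.yml` sketch for a dedicated master node in an Elasticsearch cluster of that era (the cluster name and host list are hypothetical examples, not prescribed values):

```yaml
# elasticsearch.yml -- dedicated master node (ES 6.x-era settings; names are examples)
cluster.name: prod-logging            # hypothetical cluster name
node.name: master-1
node.master: true                     # eligible to be elected master
node.data: false                      # holds no shard data
discovery.zen.ping.unicast.hosts: ["master-1", "master-2", "master-3"]
discovery.zen.minimum_master_nodes: 2 # quorum for 3 master-eligible nodes
```

Data nodes invert the two `node.*` flags, and every one of these values has to stay consistent across the cluster as it is resized, which is exactly the ongoing management burden described above.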

In-house vs. managed

Once you’ve decided to take on the day-to-day management yourself or hand it off to another vendor, Elasticsearch can become a significant investment. Most aspects will be fixed, but the parts that matter won’t be: time and resources.

As I started to research this topic, a few helpful guides were shared with me so that I could educate myself on what a cost breakdown would look like. Since Elasticsearch itself is an open-source tool, the real cost comes down to running the actual hardware. You could make Elasticsearch extremely cheap to operate, but you’re also going to have an awful time. No one willingly chooses a bad experience, so assuming you select appropriate hardware, the cost is on you, and the longer you keep your data, the more it will cost. In any cost breakdown, operations cost (your employees) will always be the highest, because it is the most significant unknown. Knowing this, it makes complete sense that organizations often consider a managed option.

For the managed option, what is it that you’re actually getting? Sure, there's no headache of managing the cluster and all of its nuances, but why sacrifice ownership of your data, or control over how long you can analyze it? Eventually you’ll want those logs back, you’ll need to store them, and you’re more than likely going to save them in S3 if you’re an AWS customer: two steps forward and one step back. We’ve all been there: Customer X needs to debug something, but we just hit our retention window, and now those logs are long gone.

Store everything and ask anything

At first, we thought our customers wanted to extend the capabilities of their existing ELK stack. What they really want is to remove the complexity of managing Elasticsearch: no cluster, no nodes, no sharding, and no more worrying about sadness. While we can be an extension of your hot Elasticsearch cluster, we’re learning that as organizations grow, their data footprint grows as well. As this data increases in volume, it becomes more complex, and you may start asking “What do we need for compliance?”, “What do we want for testing?”, and “What do we care about?” For CHAOSSEARCH customers, log management becomes simpler. You’re not required to move data from here to there and back; all we need is for you to store your data in your S3 account. Get your logs in, analyze them, and then move them out to another service like Glacier, or remove them altogether. No need to keep playing musical chairs with your data: once it's indexed, you can delete the source data, and it remains available for search.

Most, if not all, of our customers are already using many Amazon services, making it a logical next step to ship logs to S3, or to take advantage of Logstash to gather data from multiple sources and push it to S3. Once the data is in S3, CHAOSSEARCH gets to work. There is no cluster or node management, because we’ve separated compute (we manage this) from storage (all in your AWS S3 account). There are no more mappings to build, because CHAOSSEARCH auto-discovers your schema and creates the mappings as we index your data. If something was missed, you can go back and adjust the schema without having to reindex. All of your structured log data that has been sitting in S3, or is being sent to one centralized location, is now fully indexed and available for search directly in Kibana.
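As a sketch of that Logstash-to-S3 path, a pipeline like the following uses Logstash's standard Beats input and `s3` output plugins to gather logs and write them to a bucket (the port, bucket name, region, and key prefix here are hypothetical examples):

```
# logstash.conf -- ship incoming logs to S3 (bucket/region/prefix are examples)
input {
  beats {
    port => 5044            # receive events from Filebeat/Beats shippers
  }
}

output {
  s3 {
    region => "us-east-1"
    bucket => "my-log-bucket"                 # hypothetical bucket name
    prefix => "logs/%{+YYYY}/%{+MM}/%{+dd}"   # partition objects by date
    codec  => "json_lines"                    # one JSON event per line
  }
}
```

Date-based prefixes like this keep the bucket organized by day, which also makes it easy to expire or transition older objects to Glacier with an S3 lifecycle rule.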

The conversations we have with customers usually break into thirds: one third has been waiting for something this simple and useful for S3, another third thinks we’re crazy, and the rest, well, they’re indexing terabytes of data that have been sitting in S3 and getting insights that their competition can’t.

Request Free Trial