NEWS ALERT! Amazon S3 Express One Zone delivers huge gains for ChaosSearch customers - check out the Amazon blog -->

ChaosSearch Blog


Leveraging Amazon S3 Cloud Object Storage for Analytics

With its simplicity, flexibility, and cost-efficient characteristics, Amazon Simple Storage Service (Amazon S3) cloud object storage has become the preferred platform for collecting, analyzing, and retaining today’s growing mountain of diverse and disjointed enterprise data.

And as AWS continues to grab market share in the hyperscale IaaS/PaaS/SaaS marketplace, organizations of every size are leveraging Amazon S3 to underpin a variety of use cases, such as:

  • Running cloud-native and mobile applications/web services,
  • Archiving data for regulatory and compliance purposes,
  • Enabling disaster recovery from the cloud, and
  • Establishing enterprise data lakes to facilitate big data analytics, unlocking insights.

Our blog this week features an in-depth look at the state of Amazon S3 object storage in 2022, including the benefits and challenges that enterprises face when storing data with the world’s leading cloud storage platform.


Amazon S3 Object Storage for Log Analytics


What is Amazon S3 Cloud Object Storage?

Amazon S3 is a cloud-based object storage service offered by AWS. Although the concept of object storage has been around since the mid-1990s, it only began to gain true popularity after AWS launched S3 as one of its first cloud services in 2006.

Three key factors have contributed to the dramatic rise and ongoing popularity of object storage:

  • Unstructured Data Growth - Over the past decade, enterprises have seen exponential growth in unstructured data from a variety of sources. Object storage can store data in “any/all formats” that wouldn’t otherwise fit into the rows and columns of a traditional relational database, such as emails, photos, videos, logs, and social media.
  • Meeting Compliance Requirements - Object storage attaches rich metadata to each object, describing what the data is, where it came from, and how it may be used. For compliance regimes that demand strict constraints on data use and retention, this metadata acts as an essential guardrail.
  • Adoption of Cloud Computing - The killer application for object storage is unquestionably cloud computing. An object stored in the cloud is given an address that allows external systems to find it no matter where it’s stored, and without knowing its physical location in a server, rack, or data center.

Unlike traditional databases that store relational data in a structured format (rows and columns), Amazon S3 object storage uses a flat, non-hierarchical data structure that enables the storage and retention of enormous volumes of unstructured data.
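To make the flat namespace concrete, here is a short Python sketch (the bucket, region, and key names are hypothetical) showing how objects with slash-separated key names still live in a single flat namespace, each resolving to one addressable URL in S3's virtual-hosted addressing style:

```python
def object_url(bucket: str, region: str, key: str) -> str:
    """Build the virtual-hosted-style URL for an S3 object."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# S3 has no real directories: "logs/2022/01/app.log" is one flat key;
# the slashes are just a naming convention that tools render as folders.
keys = [
    "logs/2022/01/app.log",
    "logs/2022/02/app.log",
    "images/logo.png",
]

for key in keys:
    print(object_url("example-bucket", "us-east-1", key))
```

Because the namespace is flat, "moving" an object between "folders" is really a copy under a new key name, not a filesystem operation.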


What are the Benefits of Amazon S3 Object Storage?



Scalability

Amazon S3 object storage provides a highly scalable repository for all kinds of unstructured data. AWS’s pay-as-you-go model for data storage in S3 means that organizations can start small and scale their usage over time, with AWS’s global network of data centers providing what essentially amounts to unlimited storage space.

And it isn’t just your storage space that scales - it’s also your ability to execute requests on your S3 buckets. S3 automatically scales to manage high request rates of up to 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. AWS users can also use partitioning and parallelization techniques to further scale read/write operations in Amazon S3.
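One common partitioning technique is to spread keys across hash-derived prefixes, since each prefix carries its own request-rate allowance. The sketch below is our own illustration, not an AWS API; the function name and shard count are invented:

```python
import hashlib

def sharded_key(key: str, n_shards: int = 16) -> str:
    """Prepend a short hash-derived shard prefix so that requests
    spread evenly across n_shards prefixes, each of which gets its
    own per-prefix request-rate allowance in S3."""
    digest = hashlib.md5(key.encode()).hexdigest()
    shard = int(digest, 16) % n_shards
    return f"{shard:02x}/{key}"

# The same logical key always maps to the same shard prefix,
# so readers can recompute the full key deterministically.
print(sharded_key("logs/app.log"))
```

With 16 shards and evenly distributed keys, aggregate write throughput could in principle approach 16 × 3,500 requests per second, though real-world results depend on access patterns.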



Durability

AWS defines durability as the probability that an object in cloud storage will remain intact and accessible after a period of one year. Across all storage tiers, Amazon S3 object storage delivers 99.999999999% durability. In practice, this means that if you stored 10,000 objects in S3, you could expect to lose a single object every 10,000,000 years.
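The arithmetic behind that claim is easy to verify; a few lines of Python (a worked check of the figures above, not AWS's own durability model) confirm it:

```python
durability = 0.99999999999            # 11 nines, per object, per year
annual_loss_prob = 1 - durability     # ~1e-11 chance of losing any one object
objects = 10_000

# Expected object losses per year across the whole set
expected_losses_per_year = objects * annual_loss_prob   # ~1e-7

# Average years between single-object losses
years_per_loss = 1 / expected_losses_per_year           # ~10,000,000
print(round(years_per_loss))
```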

Highly durable AWS object storage comes within a hair's breadth of making data loss events a thing of the past. Features like secure access permissions, versioning, backups, and cross-region replication allow organizations to essentially remove the possibility of a data loss event that impacts operations or compliance.



Accessibility

Data stored in the AWS cloud can be accessed over the Internet from anywhere in the world. S3 buckets are highly secure and private by default, but users can also choose to make their S3 buckets publicly available with a few simple configuration changes. AWS customers can also use features like S3 Object Ownership and S3 Access Points to control which users at which locations have access to which data.
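As an illustration of those configuration changes, a minimal bucket policy like the following (the bucket name is hypothetical) is roughly what making part of a bucket publicly readable looks like in practice; consult AWS's policy documentation before applying anything similar:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/public/*"
    }
  ]
}
```

Scoping the `Resource` to a single prefix, as shown, keeps the rest of the bucket private by default.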

There’s also the Amazon S3 API, which enables programmatic access to data stored in S3 buckets, allowing external developers to write code that uses S3 functionality or accesses data in their cloud object storage.



Cost-Effectiveness

By moving data storage out of on-prem servers and into the cloud, organizations have been able to reduce their capital costs and accelerate innovation. Massive data centers give AWS powerful economies of scale, making Amazon S3 object storage one of the most cost-effective storage options for enterprise data.

S3 Storage Classes allow AWS customers to lower their data storage costs even further, with a range of cost-optimized object storage classes that satisfy a full spectrum of data access and resiliency requirements.

At one end of the spectrum, Amazon S3 Standard is generally the best choice for regularly accessed data. At the other end, S3 Glacier Deep Archive offers the lowest cost storage for long-term archive data and data preservation use cases.
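To sketch how storage classes combine with lifecycle rules, here is a hypothetical lifecycle configuration (in the JSON shape accepted by the S3 lifecycle API; the rule ID and prefix are invented) that tiers aging log data down the cost spectrum:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

A rule like this keeps fresh logs in S3 Standard for fast access, then moves them automatically to cheaper tiers as they age, with no application changes.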

READ: Troubleshooting Cloud Services and Infrastructure with Log Analytics


How to Use Amazon S3 Object Storage for Data Analytics and Analysis


What are the Challenges of Using Amazon S3 Object Storage for Data Analytics?

Benefits like cost-effectiveness and scalability have driven enterprise organizations to start using Amazon S3 object storage for their data needs. Yet despite the growing popularity of S3, data-driven organizations have historically faced challenges when it comes to identifying, standardizing, and analyzing data in S3 object storage.

Three major factors can make object storage analytics feel complicated and out of reach for enterprise organizations.


Data Visibility Challenges

Data lakes have always been a promising use case for Amazon S3 object storage, but as organizations ingest exponentially more data into S3 buckets, it becomes more complex and time-consuming to implement features like metadata tags that help data scientists know what data is available in S3 buckets.

The term “data swamp” was coined to describe this exact situation, where an influx of unstructured, untagged, poorly organized, or poorly managed data slows down data lake operations and prevents organizations from leveraging their data to its full potential.

READ: 10 AWS Data Lake Best Practices


Unstructured Data Format

Amazon S3 object storage is not a traditional database.

While traditional database applications were designed and developed to meet yesterday’s requirements for managing structured and relational data in tables (columns and rows), Amazon S3 was designed to meet today’s requirements for storing and managing diverse and disjointed data from disparate sources.

There’s nothing wrong with using a relational database for structured data and Amazon S3 for unstructured data, but note the following: most data analytics tools are built around relational databases. Because the data you store in Amazon S3 object storage is not relationally formatted, most architectures require you to process and transform it before it can be analyzed efficiently.


Data Movement and ETL

Data movement and the ETL process are among the biggest barriers when it comes to leveraging Amazon S3 object storage for analytics.

Before you can run analytics on data in your cloud object storage, you’ll need to invest valuable time and resources to clean and prepare your data, transform it into an analytics-friendly format, and load it into your analytics platform - a process known as ETL.
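The clean-transform-load steps above can be sketched in a few lines of plain Python (the log records and field names are invented for illustration):

```python
import json

# Raw, semi-structured log lines as they might land in object storage
raw_lines = [
    '{"ts": "2022-03-01T12:00:00Z", "level": "ERROR", "msg": "timeout"}',
    'not valid json',
    '{"ts": "2022-03-01T12:00:05Z", "level": "INFO", "msg": "retry ok"}',
]

def extract(lines):
    for line in lines:
        try:
            yield json.loads(line)      # Extract: parse each raw record
        except json.JSONDecodeError:
            continue                    # Clean: drop malformed rows

def transform(record):
    # Transform: normalize into a flat, analytics-friendly schema
    return {
        "timestamp": record["ts"],
        "severity": record["level"].lower(),
        "message": record["msg"],
    }

# Load: append the prepared rows to the analytics target
warehouse = [transform(r) for r in extract(raw_lines)]
print(warehouse)
```

Even this toy pipeline shows why ETL gets expensive: every record is parsed, validated, reshaped, and copied before a single query runs.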

ETL is time-consuming, with data cleaning and preparation taking up to 80% of the total time spent on data analytics. It also introduces additional complexity, increases compute and storage costs, creates data privacy and compliance gaps, and delays insights, ultimately limiting the value of your data.

READ: How to Move Kubernetes Logs to S3 with Logstash


A Modern Way to Activate Your Amazon S3 Object Storage for Analytics

Shameless plug: There’s a new way to activate your S3 object storage for analytics without any physical data movement. The ChaosSearch Cloud Data Platform transforms your Amazon S3 object storage into an analytical data lake, enabling robust data analytics at scale and delivering on the true promise of data lake economics.

Our fully managed service makes it easy to get started: just push your data into Amazon S3, index data in-place with ChaosSearch, and enjoy Multi-API access for Log Analytics via Elastic API, BI Analytics via SQL/Presto API, as well as machine learning.

Ready to try it for yourself?

Start Your Free Trial


Additional Resources

Read the Blog: Best Practices for Modern Enterprise Data Management in a Multi-Cloud World

Watch the Webinar: Make Your Data Lake Deliver - AWSInsider

Check out the Whitepaper: Ultimate Guide to Log Analytics: 5 Criteria to Evaluate Tools

About the Author, Thomas Hazel

Thomas Hazel is Founder, CTO, and Chief Scientist of ChaosSearch. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology and the inventor of ChaosSearch's patented IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, where he is a Hall of Fame Alumni Inductee, and he founded both student and professional chapters of the Association for Computing Machinery (ACM).