AWS Re:Invent Re:View

Gabriel Tocci
9 min read · Dec 8, 2023


At AWS re:Invent last week, the spotlight was on Generative AI. This included managed cloud services for developing AI solutions (AWS Bedrock), generative AI assistants (Amazon Q), and the integration of AI features into several existing AWS services.

If you missed re:Invent, this is a summary of the event with a focus on services relevant to hosting ERP systems in AWS.

This document is separated into the following sections:
- Generative AI
- Security
- Cost Optimization
- DevOps
- Data Engineering
- Hardware

Generative AI

Many of the generative AI announcements below are still in preview.

Amazon Q was the announcement that captured the largest buzz during AWS re:Invent. Amazon Q is a superset of generative AI AWS services built on Amazon Bedrock, a fully managed service that provides foundation models (text, image, and multimodal) via an API.
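
For context, here is a minimal sketch of what calling a Bedrock foundation model looks like with boto3. The model ID and request body format are assumptions based on the Anthropic Claude v2 model; each model family expects its own body format, so swap in whichever model you have enabled.

```python
import json
import boto3

# Bedrock exposes foundation models behind a single managed runtime API.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumes the Anthropic Claude v2 model is enabled in your account.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the key re:Invent 2023 announcements.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```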

One of the main selling points for Q is that it’s designed for enterprise customers. It is reported to respect permission boundaries and only return responses using data the user has access to. Q can also integrate with over 40 popular business applications as data sources (Microsoft Office 365, Google G-Suite, Slack, Jira, and of course AWS services).

Amazon Q Assistant is a generative AI chatbot designed for querying AWS systems. This release has received a fair amount of criticism and appears a few iterations away from creating consistent value for users. The goal of this tool is to let you ask natural language questions about your AWS service usage and get recommendations for optimization or new solutions.

Unfortunately, I was not able to generate any helpful responses in my testing. One test came back a little better than the rest, but the response appeared to be a generic AWS answer, where I was expecting an answer tailored to my AWS resource usage.

Amazon CodeWhisperer has a new Q integration that performs code review, with a focus on evaluating security vulnerabilities and dependencies. It is similar to GitHub Copilot and will scaffold code for new features in your application.

Amazon Q Code Transformation is a new feature that upgrades your Java applications to a newer Java version (for example, Java 8 or 11 to Java 17).

Q for Insights (Generative BI) allows users to ask natural language questions about business data hosted in Amazon QuickSight. Here is a great (short) demo of the feature: https://www.youtube.com/watch?v=uBG7lFXV6II

Amazon Q Data Integration in AWS Glue is a new generative AI capability of AWS Glue that enables data engineers and ETL developers to build data integration jobs using natural language. Engineers and developers can ask Q to author jobs, troubleshoot issues, and answer questions about AWS Glue and data integration.

Security

AWS Security Hub now supports multiple Regions and accounts, sort of. Findings are replicated to a primary (aggregation) Region, and you manage everything from there. Cross-Region aggregation is not enabled automatically; you have to configure it. Still, this is much better than having to switch accounts and Regions to see findings.

AWS Security Hub is priced per finding, per ingestion from external sources, and per automation rule evaluation. It is not billed for data storage, so I don’t believe cross-Region replication would increase the cost, as long as you are not running automation rules in both Regions.
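
If you want to try the cross-Region setup, a minimal boto3 sketch run against your intended aggregation Region might look like this (the linked Regions below are placeholders):

```python
import boto3

# Run this against the Region you want to use as the aggregation Region.
securityhub = boto3.client("securityhub", region_name="us-east-1")

# Replicate findings from the listed Regions into this one.
aggregator = securityhub.create_finding_aggregator(
    RegionLinkingMode="SPECIFIED_REGIONS",
    Regions=["us-west-2", "eu-west-1"],  # example linked Regions
)

print(aggregator["FindingAggregatorArn"])
```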

Amazon Detective introduces investigations for Identity and Access Management (IAM) users and roles. If you are not familiar with Amazon Detective, this service collects and analyzes logs from your AWS services, then uses machine learning, statistical analysis, and graph theory to help you conduct faster and more efficient security investigations and identify indicators of compromise (IoC). It can now perform this analysis on IAM entities (users and roles).

AWS Identity and Access Management (IAM) released two new features, the Unused Access Analyzer and Custom Policy Checks.

The AWS IAM Unused Access Analyzer is designed to actively monitor existing roles and users to identify permissions that have been granted but remain unused. Security teams are provided a new dashboard that shows underutilized permissions, roles, and IAM users.
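
Under the hood this is a new analyzer type in IAM Access Analyzer. A minimal boto3 sketch of creating one (the analyzer name and the 90-day threshold are assumptions for illustration):

```python
import boto3

access_analyzer = boto3.client("accessanalyzer")

# Create an account-level analyzer that flags access unused for 90+ days.
analyzer = access_analyzer.create_analyzer(
    analyzerName="unused-access-analyzer",
    type="ACCOUNT_UNUSED_ACCESS",
    configuration={"unusedAccess": {"unusedAccessAge": 90}},
)

print(analyzer["arn"])
```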

AWS IAM Custom Policy Checks enable validation of new policies to ensure they do not inadvertently grant excessive permissions. If you have CI/CD pipelines that create or modify policies, you can include these checks for precise control over your IAM policies.
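
Here is a minimal sketch of how such a check could be wired into a pipeline step with boto3; the two example policies are hypothetical and only there to show the comparison:

```python
import json
import boto3

access_analyzer = boto3.client("accessanalyzer")

# Hypothetical example: compare a proposed policy against the currently
# deployed one and fail the pipeline if it grants any new access.
existing_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::example-bucket/*"}],
})
proposed_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"],
                   "Resource": "arn:aws:s3:::example-bucket/*"}],
})

check = access_analyzer.check_no_new_access(
    existingPolicyDocument=existing_policy,
    newPolicyDocument=proposed_policy,
    policyType="IDENTITY_POLICY",
)

if check["result"] != "PASS":
    raise SystemExit("Policy change grants new access: " + check.get("message", ""))
```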

Amazon GuardDuty released (in preview) runtime threat detection of workloads running on AWS EC2 Instances. It provides visibility into the host, operating system, and container-level contexts to detect threats. The integration of GuardDuty with AWS Organizations allows you to centrally activate runtime threat detection coverage for accounts and workloads across the organization, streamlining your security coverage via a native AWS service.

Amazon Elastic Kubernetes Service (EKS) introduced Pod Identity, which lets you natively map Kubernetes service accounts to AWS IAM roles. This eliminates the need for third-party tools such as kube2iam or kiam, or for IAM instance profiles on EKS nodes, to create service-level isolation.
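
A minimal boto3 sketch of creating the mapping (cluster, namespace, service account, and role ARN are placeholders); it assumes the EKS Pod Identity Agent add-on is installed on the cluster and that the role's trust policy allows the pods.eks.amazonaws.com service principal:

```python
import boto3

eks = boto3.client("eks")

# Map a Kubernetes service account to an IAM role natively.
association = eks.create_pod_identity_association(
    clusterName="my-cluster",
    namespace="erp",
    serviceAccount="app-service-account",
    roleArn="arn:aws:iam::123456789012:role/erp-app-role",
)

print(association["association"]["associationArn"])
```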

Cost Optimization

AWS Cost Optimization Hub is a new service that consolidates information from 15 disparate AWS cost optimization features. These include suggestions for EC2 instance rightsizing, Graviton migration, detecting idle resources, and Savings Plans recommendations. The consolidated information is accessible through a unified dashboard across the AWS accounts and AWS Regions within your organization. The Cost Optimization Hub is free to use, and the only caveat is that it takes 24 hours to populate with data.
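
The hub is also exposed through a new API. A rough sketch of pulling recommendations with boto3; the item field names printed below are assumptions for illustration, so check the actual response shape in your account:

```python
import boto3

# The hub must be enabled (enrolled) first and takes ~24 hours to populate.
coh = boto3.client("cost-optimization-hub", region_name="us-east-1")

recommendations = coh.list_recommendations()

for item in recommendations.get("items", []):
    # Field names here are assumptions; inspect the real response keys.
    print(item.get("currentResourceType"), item.get("estimatedMonthlySavings"))
```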

Amazon CloudWatch has a new log class (CloudWatch Logs Infrequent Access) for infrequently accessed logs that should create real savings (~50%) on log costs. CloudWatch also now supports queries using natural language and includes anomaly detection, which seems useful.
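
The log class is chosen when the log group is created and cannot be changed afterwards, so infrequently queried logs get their own log group. A minimal boto3 sketch (the log group name is a placeholder):

```python
import boto3

logs = boto3.client("logs")

# Create a log group in the Infrequent Access class; the class is set
# at creation time and cannot be changed later.
logs.create_log_group(
    logGroupName="/erp/app/debug",
    logGroupClass="INFREQUENT_ACCESS",
)
```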

AWS Backup now supports the EBS Snapshots Archive Tier. This will help reduce the cost of long-term backup storage.

Amazon Elastic File System (EFS) has a new throughput mode called Elastic Throughput, distinct from the existing Bursting Throughput mode. EFS is an expensive service, and optimization is complex because you must consider storage size, throughput (I/O) requirements, and storage class (Standard, Infrequent Access, and Archive). The new mode helps workloads with unpredictable throughput requirements: elastic throughput scales with load, whereas bursting throughput scales with stored size via a credit system. With Elastic Throughput you can also transition files to the Archive storage class for additional savings on long-term file storage.
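
Switching to elastic throughput is a one-line change, whether on a new or an existing file system. A minimal boto3 sketch with a placeholder file system ID:

```python
import boto3

efs = boto3.client("efs")

# Create a new file system with elastic throughput...
new_fs = efs.create_file_system(
    ThroughputMode="elastic",
    Encrypted=True,
)

# ...or switch an existing file system over (placeholder ID).
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    ThroughputMode="elastic",
)
```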

Amazon OpenSearch now has its own instance family (OR1). These instances are specifically designed for indexing-heavy workloads and, according to AWS, outperform existing memory-optimized instances with up to a 30% improvement in price performance.

DevOps

AWS CloudFormation now supports GitOps via a new feature named GitSync. This means you can commit and push your CloudFormation templates, and CloudFormation will update the stack with the changes. This sounds great, but it does not support AWS CodeCommit; it supports only GitHub, GitLab, and Bitbucket.

AWS Console-to-Code is a new feature that generates AWS CloudFormation templates from actions you perform in the console, aka Click-Ops. Unfortunately, this feature only supports EC2 and is far behind AWSConsoleRecorder in maturity, both in service support and in output languages/formats (Terraform, CLI, Python, etc.). https://github.com/iann0036/AWSConsoleRecorder

The AWS Elastic Container Service (ECS) dashboard now includes alarm suggestions as code for metrics. This makes adding new alarms to your IaC codebase a snap.

The Amazon Managed Service for Prometheus (AMP) collector provides agentless metric collection for Amazon Elastic Kubernetes Service (EKS). This is a great feature, as it eliminates the need to install and configure a Prometheus agent in your EKS clusters. Right now, this only includes node exporter and kube-state-metrics, but it is a good start.

Data Engineering

AWS B2B Data Interchange is a fully managed service for Electronic Data Interchange (EDI). This could be a big help for teams hosting systems in AWS that require EDI integrations. This service enables you to set up EDI integrations without having to manage the underlying infrastructure.

You can exchange EDI data with your integrated partners using the communication protocol or connectivity tool of your choice, including the AWS Transfer Family. You can also automate the transformation of incoming EDI documents into JSON and XML using a low-code mapping/template editor. The mappings can be reused across multiple partner integrations or associated with a single integration.

This service costs $8 per month per integration and $0.01 per transformation. It is currently available only in US Regions. All files are stored in S3, and the service integrates with the AWS services you would expect, such as CloudWatch for logging, metrics, and alerts.

Amazon RDS now supports IBM Db2 databases. For customers running Db2, this brings the engine under the full suite of RDS features: high availability with Multi-AZ deployments, disaster recovery solutions, and cross-Region backups. The only version I found available was 11.5.9.0.
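
A hedged boto3 sketch of creating a Db2 instance; the identifier, instance class, and sizes are placeholders, and the engine names and licensing details below are my reading of the launch and worth verifying against the RDS documentation:

```python
import boto3

rds = boto3.client("rds")

# Placeholder values; Db2 on RDS is bring-your-own-license, and my
# understanding is it also requires a custom parameter group carrying
# your IBM customer and site IDs.
rds.create_db_instance(
    DBInstanceIdentifier="erp-db2",
    Engine="db2-se",                 # 'db2-se' or 'db2-ae' (assumed engine names)
    EngineVersion="11.5.9.0",
    DBInstanceClass="db.m6i.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # let RDS manage the password in Secrets Manager
    LicenseModel="bring-your-own-license",
    MultiAZ=True,
)
```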

Amazon ElastiCache now offers a serverless version. With serverless ElastiCache you can create a highly available and scalable cache in under a minute, eliminating the need for planning, provisioning, and managing cache cluster capacity.

These serverless caches automatically and redundantly store data across multiple Availability Zones (AZs) for 99.99% (four nines) availability. You pay only for the data stored and the compute resources consumed by your workload.
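
A minimal boto3 sketch of creating one; the cache name and the storage limit are placeholders, and the usage limit is optional but worth setting as a guardrail on spend:

```python
import boto3

elasticache = boto3.client("elasticache")

# Create a serverless Redis cache; no node types or cluster sizing needed.
cache = elasticache.create_serverless_cache(
    ServerlessCacheName="erp-session-cache",
    Engine="redis",
    # Optional guardrail so a runaway workload cannot run up the bill.
    CacheUsageLimits={"DataStorage": {"Maximum": 10, "Unit": "GB"}},
)

print(cache["ServerlessCache"]["Status"])
```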

Amazon Aurora has introduced sharding (declarative table partitioning) for PostgreSQL-compatible instances, referred to as the Aurora Limitless Database. This enables the database to distribute data and execute transactions in parallel across multiple instances, behind a single serverless endpoint. This feature is in limited preview only, so you must request access.

If you operate a PostgreSQL Aurora cluster and need to handle millions of write transactions per second and effectively manage petabytes of data, this may be a helpful feature for you.

Amazon S3 Express One Zone is a new storage class that supports single-digit millisecond latency and millions of requests per second. It is up to 10x faster than the standard storage class, but it comes at a premium price and with lower resilience, since data is stored in a single Availability Zone. Express One Zone costs $0.16 per GB-month, roughly 7x the $0.022 per GB-month Standard tier.
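
Express One Zone uses a new "directory bucket" type pinned to a specific Availability Zone, and the bucket name embeds the AZ ID. A hedged boto3 sketch with a placeholder name and AZ:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Directory bucket names must embed the Availability Zone ID and end
# with the --x-s3 suffix (placeholder name and AZ below).
s3.create_bucket(
    Bucket="erp-scratch--use1-az5--x-s3",
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "use1-az5"},
        "Bucket": {"DataRedundancy": "SingleAvailabilityZone", "Type": "Directory"},
    },
)
```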

Several Zero-ETL Integrations were announced, three for Amazon Redshift (Aurora PostgreSQL, DynamoDB, and RDS for MySQL), and three for Amazon OpenSearch (DynamoDB, S3, and S3 Data Lake).

Zero-ETL integrations promise that you can transfer data from disparate sources to your analytics systems without building a custom transformation pipeline. The difficulty with this approach is that ETL transformation rules typically contain valuable business logic that the analytics engine needs to compare apples to apples. This service appears to be a managed replication to Redshift, which is nice if it fits your use case.

AWS Glue Data Quality is a new feature that enables anomaly detection by applying machine learning (ML) algorithms over time to detect abnormal patterns and hidden data quality issues.

Hardware

Graviton4 and Trainium2 are the latest generation of Amazon’s proprietary processors.

Graviton4 processors are Arm-based chips designed to deliver better price performance for workloads running on Amazon EC2. Graviton4 outperforms its predecessor, offering up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than the current Graviton3 processors.

Trainium2 processors are specifically designed for deep learning and support efficient data and model parallelism. They are engineered to deliver up to 4 times faster training than their predecessor, can be deployed in EC2 UltraClusters, and improve energy efficiency by up to 2 times.

The Amazon WorkSpaces Thin Client was released as a simple way to connect to Amazon WorkSpaces (virtual desktops). If your organization uses WorkSpaces, this could be a big benefit. There is also an AWS console that can be used to centrally monitor, manage, and maintain the devices and their connectivity to WorkSpaces. WorkSpaces also added cross-Region replication for user data. Here is a hands-on review of the device: https://techcrunch.com/2023/11/27/amazons-new-195-thin-client-looks-just-like-a-fire-tv-cube/

Amazon One Enterprise is a biometric (palm recognition) identity device and service for access to physical spaces.

Still Reading? Great!

Drop me a line and let me know what you think of this Re:invent Re:view:
www.gabrieltocci.com


Gabriel Tocci

www.gabrieltocci.com | Senior Cloud Architect and Engineer | Industry Leader in Higher Education