/ Andrew Martin - Meteor-Proof Infrastructure: Reproducible Environments with Container Build Images
Treating our infrastructure as immutable and expressing, versioning, and storing it as code is now becoming common practice. But how are changes to that code observed and tested? How do you know what code was used to build production infrastructure? Can you bring it back if it gets hit by a meteor? Don't be a dinosaur: protect yourself from the inevitable with versioned, tested, and audited container build images. In this talk we describe how to:
- bake and version container images with the config to rebuild environments (a minimal sketch follows this list)
- manage secrets separately and securely
- test throughout the pipeline for veracity, security, and compliance
- provision environments to run everything as code
- allow fast local iteration without impacting other users or environments
- recover from catastrophe, intrusion, or the dreaded fat finger of doom
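As a rough illustration of the first item above (a sketch, not code from the talk), the following tags a container build image with the exact infrastructure-code commit it was built from, so a production environment can be traced back to, and rebuilt from, its source; the registry name and label key are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Sketch: bake an image and version it with the infra-code commit it was built from."""
import subprocess

# Hypothetical image name, purely for illustration.
IMAGE = "registry.example.com/infra/build-image"

def sh(*cmd: str) -> str:
    """Run a command and return its stripped stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

def bake_and_version() -> str:
    commit = sh("git", "rev-parse", "--short", "HEAD")   # the version is the commit itself
    tag = f"{IMAGE}:{commit}"
    # Record the commit as an image label too, so provenance travels with the artefact.
    sh("docker", "build", "--label", f"infra.commit={commit}", "-t", tag, ".")
    sh("docker", "push", tag)
    return tag

if __name__ == "__main__":
    print("pushed", bake_and_version())
```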
/ About Andrew
Andrew is a co-founder at the Kubernetes and container security-engineering consultancy https://control-plane.io. He has an ardent test-first background gained developing and deploying high-volume web applications. Proficient in application development and in systems architecture and maintenance, he is comfortable profiling and securing every tier of a bare-metal or virtualised web stack, and has battle-hardened experience delivering containerised solutions to enterprise clients.
/ Dan Cook - Bank to the Future: Bitcoin meets Hadoop
The Apache Hadoop ecosystem has long been 'relegated' to offline analytics and dismissed as a technology of choice for building online transaction processing systems. If the Bitcoin hype is to be believed, we can ditch our old model of transactional locking and relational databases and build a better bank.
We’ve long assumed that, to move money reliably from one account to another, we need a transaction to wrap the debit from one account and the credit to the other. The blockchain has challenged this thinking, recording every transaction on a ledger in an append-only manner. Let’s take the ideas of Bitcoin and apply them with the original append-only king, Hadoop, to build a bank that can scale its processing to millions of transactions per second and store petabytes of data. And for good measure we’ll look at Apache Kafka, the Hadoop Distributed File System, Spark, and Accumulo along the way.
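To make the append-only model concrete (an illustrative sketch, not code from the talk), a transfer below becomes two immutable ledger entries, and balances are simply a view derived by replaying the log; in the architecture described above that log would live in a Kafka topic or on HDFS rather than in an in-memory list.

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, Iterable, List

# Hypothetical event type for illustration: one immutable ledger entry per transfer leg.
@dataclass(frozen=True)
class LedgerEntry:
    account: str
    amount: int  # positive = credit, negative = debit, in minor units (e.g. pence)

def transfer(source: str, destination: str, amount: int) -> List[LedgerEntry]:
    """A transfer is just two entries appended to the log -- nothing is updated in place."""
    return [LedgerEntry(source, -amount), LedgerEntry(destination, +amount)]

def balances(log: Iterable[LedgerEntry]) -> Dict[str, int]:
    """Balances are derived by replaying the log, the job Spark or Accumulo would do at scale."""
    totals: Dict[str, int] = defaultdict(int)
    for entry in log:
        totals[entry.account] += entry.amount
    return dict(totals)

if __name__ == "__main__":
    log: List[LedgerEntry] = []
    log += transfer("alice", "bob", 2500)
    log += transfer("bob", "carol", 1000)
    print(balances(log))  # {'alice': -2500, 'bob': 1500, 'carol': 1000}
```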
/ About Dan
Dan is a Technical Architect and Developer. He has led the development of the UK Hydrographic Office’s Hadoop-as-a-Service offering and built software streaming frameworks long before the rise of Apache Spark and Storm. More recently he helped build a real-time alerting pipeline that processed in excess of 150,000 events per second using Apache Kafka and Spark Streaming. Most notably, all of his talks are delivered in a Yorkshire accent.