BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Pentabarf//Schedule 0.3//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALDESC;VALUE=TEXT:Software Defined Storage devroom
X-WR-CALNAME;VALUE=TEXT:Software Defined Storage devroom
X-WR-TIMEZONE;VALUE=TEXT:Europe/Brussels
BEGIN:VEVENT
METHOD:PUBLISH
UID:14406@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T103000
DTEND:20230204T111000
SUMMARY:Lessons learnt managing and scaling 200TB glusterfs cluster @PhonePe
DESCRIPTION:
We manage a 200TB glusterfs cluster in production, and along the way we have learnt some key lessons. In this session, we will share them with you:
As the number of clients increased, we had to scale the system to handle the growing load; here are our learnings from scaling glusterfs.
vhost-user-blk is a userspace block I/O interface that has traditionally been used to connect software-defined storage to hypervisors. This talk covers how any application that needs fast userspace block I/O can use vhost-user-blk, and its advantages over network protocols. libblkio, a client library available for C and Rust applications, will be introduced. The protocol is also summarized for those wishing to understand how it works or implement it from scratch.
This talk is intended for developers interested in connecting applications to SPDK or qemu-storage-daemon and those who want to know more about software-defined storage interfaces.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_vhost_user_blk/
LOCATION:D.sds
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Stefan Hajnoczi":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:14592@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T111500
DTEND:20230204T114000
SUMMARY:Present and future of Ceph integration with OpenStack and k8s
DESCRIPTION:OpenStack and Ceph have a long integration story that has changed over time to let the two technologies coexist in the same context as basic building blocks of cloud infrastructures. ceph-ansible has been one of the most popular orchestrators for Ceph, but cephadm and the Ceph orchestrator have been a game changer in how operators interact with Ceph. To streamline the deployment process, OpenStack services need to be configured to interact with Ceph, but there is also a need to bootstrap, configure, and tune the Ceph cluster to meet the OpenStack workload. An example is Manila, where the new ceph-mgr interface enabled new drivers and simplified the existing use cases. This talk will give an overview of the current state of the integration, describe how projects in the OpenStack ecosystem changed and updated the reference architecture following the introduction of cephadm and the Ceph orchestrator, and also look toward the Kubernetes integration, where a single Ceph cluster can be shared by OpenStack (via the rbd interface) and Kubernetes workloads (via PVCs).
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_ceph_openstack/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Francesco Pantano":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:14123@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T114500
DTEND:20230204T120500
SUMMARY:SQL on Ceph
DESCRIPTION:Ceph was originally designed to fill a need for a distributed file system within scientific computing environments but has since grown to become a dominant unified software-defined distributed storage system. Today, it is also notably used as an enterprise-quality block device and object store provider. This talk will cover the new development of an SQLite Virtual File System (VFS) on top of Ceph's distributed object store (RADOS). I will show how SQL can now be run on Ceph, both for its internal use and for new application storage requirements.
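The SQLite-on-RADOS approach described above is exposed through Ceph's libcephsqlite VFS, where databases are addressed by a pool-qualified URI. A minimal sketch of how an application might build and use such a URI (the pool and database names here are hypothetical examples, and connecting requires a running Ceph cluster with libcephsqlite installed):

```python
import sqlite3


def ceph_db_uri(pool: str, db_name: str, vfs: str = "ceph") -> str:
    """Build a SQLite URI for a database stored in a RADOS pool,
    following the file:///<pool>:/<db>?vfs=ceph convention used by
    Ceph's libcephsqlite VFS. Pool/db names are illustrative."""
    return f"file:///{pool}:/{db_name}?vfs={vfs}"


uri = ceph_db_uri("mypool", "mydb.db")
print(uri)  # file:///mypool:/mydb.db?vfs=ceph

# Against a real cluster, the VFS is loaded as a SQLite extension
# before opening the database (not runnable without Ceph, so shown
# commented out):
#
#   db = sqlite3.connect(":memory:")
#   db.enable_load_extension(True)
#   db.load_extension("libcephsqlite")
#   db.enable_load_extension(False)
#   db = sqlite3.connect(uri, uri=True)
```

The VFS translates SQLite's page reads and writes into RADOS object operations, which is what lets an unmodified SQL workload land on the distributed object store.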
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_sql_on_ceph/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Patrick Donnelly":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:14506@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T121500
DTEND:20230204T125500
SUMMARY:Dynamic load change in SDS systems
DESCRIPTION:This presentation describes the new read (aka primary) balancer being added to the next Ceph version (Reef) and explains how the framework developed as part of this balancer can serve more sophisticated use cases. Specifically, it shows how you can use this framework to create a policy that changes the SDS load dynamically, mitigating effects such as noisy neighbors and faulty network devices (NICs or ToR switches) without moving data around. This can be very useful when the effects described are temporary (for example, a noisy neighbor in a hyper-converged environment).
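The core idea of shifting load without moving data can be illustrated with a toy primary-selection policy. This is an illustrative sketch only, not Ceph's actual balancer algorithm: for each placement group, the primary (read-serving) role is assigned to the least-loaded replica, so a degraded OSD can be relieved simply by pre-loading it with a high cost.

```python
def pick_primaries(pg_replicas, osd_load):
    """Toy read balancer (not Ceph's real algorithm): for each
    placement group, serve reads from the replica whose OSD has the
    least assigned load. Only the primary role moves; data does not."""
    primaries = {}
    load = dict(osd_load)  # running tally of assigned read load
    for pg, replicas in pg_replicas.items():
        primary = min(replicas, key=lambda osd: load[osd])
        primaries[pg] = primary
        load[primary] += 1  # one unit of read load per PG (simplification)
    return primaries


# Three PGs replicated across three OSDs; osd.2 sits behind a flaky
# NIC, modeled here as a high starting load to steer reads away from it.
pgs = {"1.a": [0, 1, 2], "1.b": [1, 2, 0], "1.c": [2, 0, 1]}
print(pick_primaries(pgs, {0: 0, 1: 0, 2: 100}))
```

Because only the primary designation changes, the mitigation is cheap to apply and to revert once the temporary condition (noisy neighbor, bad NIC) clears.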
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_dynamic_load_change/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Josh Salomon":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:14683@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T130000
DTEND:20230204T134000
SUMMARY:s3gw: easy to use S3-compatible gateway for small and edge deployments
DESCRIPTION:In this talk we will present SUSE's storage team's latest passion project, s3gw (https://s3gw.io), an easy-to-use S3-compatible service for Kubernetes environments. Although focused on running on top of Longhorn (https://longhorn.io), s3gw can leverage any local filesystem or Persistent Volume provided to it. The project is divided into two main components: the s3gw service, a Ceph RADOS Gateway with a custom, filesystem-based backend leveraging RGW's SAL implementation; and the s3gw UI, which not only handles management tasks but also provides a bucket and object explorer. During our time together we will discuss s3gw's backend implementation and present the UI, with a small demonstration of how to deploy the project on a small Kubernetes cluster. With this talk we would also love to gather feedback from the attendees, so we can feed it back into project development.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_s3gw/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Joao Eduardo Luis":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:13707@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T134500
DTEND:20230204T142500
SUMMARY:Ceph RGW and Zipper
DESCRIPTION:Ceph RGW (RADOS Gateway) is an interface to Ceph, providing access to Ceph object storage using the industry-standard S3 and Swift protocols.
Zipper is a project currently underway to provide a plug-in framework to utilize other storage solutions, e.g. an SQLite database, in addition to or instead of Ceph RADOS.
Other related activities include adding Lua scripting, an Apache Arrow Flight front end, and pluggable, stackable filters.
In this talk we will provide a high-level overview of the overall Ceph architecture, then drill into the RGW architecture with the Zipper enhancements. We will take a deeper dive into the source and review the Zipper API that developers use to write a Zipper plug-in.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_ceph_rgw_zipper/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Kaleb Keithley":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:13957@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T150000
DTEND:20230204T154000
SUMMARY:Operating Ceph from Ceph Dashboard
DESCRIPTION:The talk will give you an overview of managing Ceph with the Ceph Dashboard and how we have tried to simplify management of a Ceph cluster. We will talk about the current architecture of the Ceph Dashboard and how you can easily deploy, manage, and monitor a Ceph cluster. This talk will also cover the current and newly added features of the Ceph Dashboard, its future, and how you can contribute to it as a developer or user.
We will also have a demo at the end where we'll show how easily a Ceph cluster can be deployed starting from zero, how you can manage the different components of Ceph, and how to monitor insightful information about the cluster.
Agenda: Introduction to Dashboard, Why we need management, Architecture of Dashboard, Key features, what's coming Next?, Demo
Target audience: Ceph, Ceph Management and Monitoring
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_ceph_dashboard/
LOCATION:D.sds
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Ankush Behl":invalid:nomail
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Nizamudeen A":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:13923@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T154500
DTEND:20230204T162500
SUMMARY:Intro to Ceph on Kubernetes using Rook
DESCRIPTION:In this talk we are going to introduce you to the Rook Ceph Operator, which can be used to run Ceph clusters with ease on top of Kubernetes clusters. To show how easy it is to run a Rook Ceph cluster, we will also talk about the current state of project development, the kubectl krew plugin, and some more advanced features. There will be a demo of the rook-ceph krew plugin, showing how it is used to automate common management tasks and make the troubleshooting process easier.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_rook_ceph/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Alexander Trost":invalid:nomail
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Gaurav Sitlani":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:13723@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T163000
DTEND:20230204T165500
SUMMARY:AMENDMENT Autoscaling with KEDA - Object Store Case Study
DESCRIPTION:Scaling your object store is complex, and payloads vary in size: objects can be as large as virtual machine images or as small as emails. Payloads also vary in behaviour: some mostly read, write, and list objects; others delete objects, and some keep them forever. Using CPU and RAM to autoscale the pods horizontally or vertically is limited and may adversely affect the system. Treating our object store as a queueing system that converts HTTP requests into actions on disks may be the solution! Please note that this session was originally scheduled for 18:30.
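The queueing view sketched above can be boiled down to a toy replica calculation: scale on the depth of the request queue rather than on CPU or RAM. All names and numbers below are illustrative, not KEDA's actual scaler API:

```python
import math


def desired_replicas(pending_requests: int,
                     per_pod_throughput: int,
                     min_replicas: int = 1,
                     max_replicas: int = 20) -> int:
    """Queue-based autoscaling sketch: size the deployment so each pod
    handles roughly per_pod_throughput in-flight object-store requests,
    clamped between a floor and a ceiling."""
    if per_pod_throughput <= 0:
        raise ValueError("per_pod_throughput must be positive")
    wanted = math.ceil(pending_requests / per_pod_throughput)
    return max(min_replicas, min(max_replicas, wanted))


print(desired_replicas(0, 100))     # 1 (never below the floor)
print(desired_replicas(950, 100))   # 10
print(desired_replicas(5000, 100))  # 20 (capped at the ceiling)
```

In a real deployment a KEDA scaler would feed the queue metric to the Kubernetes horizontal pod autoscaler; the point is that the signal is request backlog, not resource usage.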
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_keda_object_store/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Jiffin Tony Thottan":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:14042@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T163000
DTEND:20230204T170500
SUMMARY:CANCELLED Container Storage Interface Addons
DESCRIPTION:Please note that this talk was cancelled. The aim of this session is to discuss the Container Storage Interface (CSI), its specification, and the additional advanced operations provided by CSI-Addons. The CSI specification defines an interface along with the minimum operational and packaging recommendations for a storage provider (SP) to implement a CSI-compatible plugin. The interface declares the APIs that a plugin MUST expose: this is the primary focus of the CSI specification. The CSI-Addons project hosts extensions to the CSI specification to provide advanced storage operations. By adding new procedures to the CSI-Addons specification, additional operations for storage systems can be provided. The reference implementation is done on Kubernetes and maintained in the Kubernetes CSI-Addons repository. Some of the advanced storage operations currently supported are reclaim space, network fence, volume replication, and encryption key rotation.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_csi_addons/
LOCATION:D.sds
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="yati padia":invalid:nomail
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="rakshith-r":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:14246@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T170500
DTEND:20230204T174000
SUMMARY:CANCELLED Monitoring and Centralized Logging in Ceph
DESCRIPTION:Please note that this talk has been cancelled. The speaker is no longer able to attend FOSDEM. The objective of the talk is to highlight the various aspects and importance of two of the pillars of Observability, metrics and logs, in a Ceph storage cluster. We will talk about the current architecture of metrics collection and logging, the technology stack used, and how you can easily deploy them in Ceph. This talk will also highlight the various aspects and importance of centralized logging, which can be very useful for viewing and managing logs in a dashboard view. We will also have a demo at the end where we'll show deployment of monitoring and logging services from the Ceph dashboard.
Demos:
Monitoring demo: diagram showing the metrics collection architecture; deployment of the monitoring stack (ceph-exporter, Prometheus, Grafana); Prometheus targets and query page; Grafana dashboards embedded in the Ceph dashboard.
Centralized logging: diagram showing the centralized logging architecture; deployment of log collector and aggregation services (Promtail & Loki); pattern-based filtering in Loki.
Agenda: Introduction to Monitoring and Centralized logging Dashboard in Ceph storage cluster and Demo
Target audience: Ceph, Monitoring, Admins / DevOps / SREs.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_monitoring_ceph/
LOCATION:D.sds
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Avan Thakkar":invalid:nomail
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Aashish Sharma":invalid:nomail
END:VEVENT
BEGIN:VEVENT
METHOD:PUBLISH
UID:13943@FOSDEM23@fosdem.org
TZID:Europe-Brussels
DTSTART:20230204T174500
DTEND:20230204T182500
SUMMARY:CANCELLED First class support in OSS
DESCRIPTION:Please note that this talk has been cancelled. A natural consequence of Software Defined Storage is that your software must operate on a wide variety of platforms that you, as a developer, have little control over. Therefore, a big challenge is performing system inspection to help identify software and hardware bottlenecks and issues.
When you are dealing with customers who expect a large amount of confidentiality, how can you get detailed information at the system level? How do you provide system diagnosis that goes beyond "regular" Prometheus-style monitoring?
In this talk I will show the tools we have developed that allow our customers to validate their clusters and get their status instantly. I will demonstrate how our engineers and I can rapidly generate information about system setups and detailed metrics to assist with setup and other support questions.
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Software Defined Storage
URL:https://fosdem.org/2023/schedule/event/sds_first_class_support/
LOCATION:H.2214
ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Klaus Post":invalid:nomail
END:VEVENT
END:VCALENDAR