DockerCon

How do I stuff that huge monolithic application into a container?

Kevin Barfield, Director, Solutions Architect, Docker

Recorded on November 14th, 2023
Learn more about monolithic applications and the paths forward to containerizing them with Docker.

Transcript

Hi everybody. Welcome to “How do I stuff that huge monolithic application into a container?” My name is Kevin Barfield. I’m a Principal Solutions Architect here with Docker. Before I was in this role, I was a Java developer, and then I was a Java architect, and then I sold Java middleware. So you probably can guess there’s going to be Java in this presentation.

All right, let’s talk about our agenda. We are going to talk about container ideals and monolithic application realities. For this section, I’m going to put myself into a role as a new enterprise architect who has been given a new responsibility and kind of a nasty surprise in there. And then we’re going to jump into monolithic application modernization and containers. So I’ll talk about different strategies to modernize applications and how containers can help with that. To be clear up front, this is going to be at an architectural level. I’m not going to be breaking out code or Dockerfiles or any of that kind of stuff. I’m just going to be talking about strategies and ways to approach things.

    Container ideals and monolith realities

    All right, container ideals and monolith realities. Suppose I am a new enterprise architect at the company. I have been told that I’m going to be getting a new project, and I’ve got some ideas about what to do with it. As somebody who’s done development with containers, I believe in containers, I believe in the benefits of containerization, I want to spread that far and wide.

    What are the benefits of containerization? The first is that containers provide process-level isolation. I can limit the amount of CPU, restrict the amount of storage, restrict the networking, and restrict the memory used by a particular container. This means I can pack more workloads onto the same hardware, which is great. I can also include all the dependencies the application needs in the container, so I don’t have any developers saying, well, it runs on my machine, anymore.
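    As a rough illustration of that kind of isolation (not from the talk; the image and network names are hypothetical), setting resource limits with the Docker CLI can look like this. Storage quotas depend on your storage driver, so they are left out here:

    ```shell
    # Minimal sketch: cap the container at 2 CPUs and 4 GB of memory,
    # and attach it only to one user-defined network.
    docker run -d \
      --name shop-monolith \
      --cpus=2 \
      --memory=4g \
      --network shop-net \
      shop/monolith:1.0
    ```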

    Containers allow easy application distribution and movement. I can pick up a container, go run it on a different machine, and it’s going to run the same way. So we can feel comfortable that we can actually move these things around easily. Containers use the OCI standard. This means it doesn’t matter if I built my container using Docker, or if I run it using Rancher or somebody else’s product. If I comply with the OCI standard, all the major DevOps tool vendors and all the cloud providers are going to support it. And finally, and in my opinion most importantly as an architect, developers have a better experience working with containers to build applications.

    Container best practices

    For all the reasons that I just mentioned, it’s just easier to use containers to do this stuff. So, as the enterprise architect, I also believe in the best practices. I’ve done a lot of new development with containers, and I believe in these best practices.

    • Each container should do one thing and do it well. If I can run a bunch of containers on the same machine, why would I put a bunch of different things into the same container? I can have one process doing one thing. It’s great.
    • Don’t ship development tooling into production. Please don’t do this. If you’ve got development tooling, keep it in development. Don’t put it in production. If you put it in production, all you’re really doing is growing the attack surface for hackers.
    • Reduce the size of the image where possible. So take out the things that the container doesn’t need. If you have build artifacts or that kind of thing in there, strip those out as you’re building your image (see the multi-stage build sketch after this list). Include only what is needed to run the application. If there’s anything that’s not required to run the application, it shouldn’t be in the production image.
    • And finally, start simple and get more complicated as needed. Nobody needs to start with the most complex architecture and then work their way down. Containers make it easier to start simple and move up.
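    As one sketch of the “reduce the size” and “include only what is needed” practices, a multi-stage build keeps compilers, build tooling, and intermediate artifacts out of the final image. The image tags and paths below are illustrative assumptions, not something from the talk:

    ```dockerfile
    # Build stage: Maven, compilers, and intermediate artifacts live only here
    FROM maven:3.9-eclipse-temurin-17 AS build
    WORKDIR /src
    COPY . .
    RUN mvn -q package

    # Runtime stage: a slim JRE image containing just the finished application
    FROM eclipse-temurin:17-jre
    COPY --from=build /src/target/app.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]
    ```

    Everything in the build stage is discarded; only the runtime stage ships to production.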

    Now, let’s take just a moment here and talk about this. This is easy for that enterprise architect because they’re doing all new development. Anytime you’re doing new development, all of this is easy. It’s all straightforward. It’s all very good to use. But now let’s talk about the new project the enterprise architect has gotten. So here is the new project this person gets to work on. Because I’m a Java person, we’re going to go to Java examples.

    So this application was first deployed to production in 2010, and functionality was added over time. This application is mission critical to the business. So no, you’re not getting rid of it. That application is staying, regardless of whether you fix it or not. So we start off with our Java virtual machine. We then add the big app server with Java EE services on top of that. We then add the Spring framework on top of that. And yes, before I get any questions, yes, back in 2010, people were actually using Java EE and Spring at the same time. Oh, hey, can I get a shout out for my log4j fans? Oh, okay, too early for log4j jokes. Okay, that’s fine. I mean, who would have known that giving a logger the ability to remotely download and execute code would be a bad thing? Come on.

    All right, then we add our search engine, then even more Java libraries, because why not? Why not add another 10 jar files, another hundred jar files? It’s all good. And then lots more supporting files to go on top of that. And, finally, we have our business application. So we have an e-commerce Java application with search, inventory, sales, CRM, reporting, and support functionality all built in. So you’re asking yourself, why would anybody do that? I would then remind you of Conway’s law, which says that organizations build applications that look like the organization itself. So that’s how things like this happen.

    Because this is an enterprise application, of course, we have to integrate with things. So we have external web services called via SOAP. We have MQ calls to the back end, and we have multiple back end databases that we’re talking to. So, all told, this application is 12 and a half gigs sitting on the file system.

    Business impact of application

    So here’s our application. We’ve been told: figure it out. So here’s the business impact of this application and what we’ve been told to figure out:

    • Deployment frequency. It’s at least quarterly to really put a new release out. It could be six months or more to get a new release out the door.
    • Because of that, the lead time for changes is at least that long. And we’re not talking about big changes. Even if you want to move a field around or add some text, that kind of thing, it’s three to six months to get that change in, because it all has to go together.
    • Scaling. If you want to scale this thing, you’re going to drop an entire new instance and scale horizontally each time you need to scale it up.
    • Low reliability. Anybody who’s done Java understands this: even the smallest feature or function in this application, if it has a memory leak, is going to affect the whole application. So one small thing could take down the entire application.
    • And then a more complex developer experience. Because this huge monolith is sitting all together, the developers can’t test the code locally as easily. It’s not as easy to work with a 12 and a half gig file if you need to test a particular change. You may have to push code out to a test environment, which means a slower feedback loop, which means a slower process overall. And this complexity is really what’s driving that low deployment frequency and the long lead time for changes.
    • All this means higher cost. It means larger teams. It means more operation costs. It means more hardware. This is the business impact of having this application and why this enterprise architect has been told to do something about it.

    So here’s our application. We, I guess, are going to try to containerize this thing and see what that does for us. Let’s go back to our best practices now.

    The first best practice was: each container should do one thing and do it well. No, we are not doing that, because this thing has tons of functionality in it in different processes. Don’t ship development tooling in production. I’ve got no idea. I just got put on this project. I don’t know what developer tooling is or is not in it. Reduce the size of the image where possible. This thing’s 12 and a half gigs. What do you expect? Include only what is needed to run the application. There are literally hundreds of jar files. I don’t know which of those are actually being used and which aren’t. Start simple and get more complicated as needed. Yeah, okay.

    So I’m going to break almost all of these best practices containerizing this application. So the question becomes: can I stuff this big monolith into a Docker container? And the answer is a simple yes, you can. A container is simply an isolated process at the end of the day. There is nothing in the OCI specification that says it can only be a certain size, or can only have a certain number of processes, or can only take a certain amount of RAM, any of that kind of stuff.
    You absolutely can put this thing into a container.

    Now, did anybody come to the AI/ML workshop yesterday? Got a couple people. So if you were in that workshop, then you already knew the answer to this, because we were working with images yesterday that had individual layers that were over 12 gigs in size that we were pushing around. Now, a whole separate question would be, should those things be that big? That’s a different conversation. But absolutely, people are working with images that are this size or bigger on a day-to-day basis.

    Okay, so here I am. I’ve written a Dockerfile. I’ve copied the JVM in, I’ve copied the app server in, I’ve copied all these jars, EARs, and WARs into this thing. I’ve copied my application code, I’ve built my application, it’s in a container. Okay, well, that was a lot easier than I thought it was. Tip your waiters, have a great day. Thank you. No, there’s still more.
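    For what it’s worth, here is a rough sketch of what that “copy everything in” Dockerfile might look like. The base image, paths, and start script are hypothetical placeholders, not the talk’s actual files:

    ```dockerfile
    # Sketch only: stuff the whole monolith, JVM, app server and all, into one image
    FROM ubuntu:22.04

    # The JVM and the big application server
    COPY jdk8/ /opt/jdk/
    COPY appserver/ /opt/appserver/

    # Hundreds of supporting jars, plus the application EARs/WARs and config
    COPY lib/*.jar /opt/appserver/lib/
    COPY build/shop-app.ear /opt/appserver/deployments/
    COPY config/ /opt/appserver/config/

    ENV JAVA_HOME=/opt/jdk
    EXPOSE 8080
    ENTRYPOINT ["/opt/appserver/bin/start.sh"]
    ```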

    Benefits revisited

    So let’s move forward. What are the benefits of doing this? Why did I put this thing into a container in the first place? What does putting a monolith into a container actually do for me? Well, let’s go back to the benefits.

    • Containers provide process level isolation, meaning I can pack more things onto the same hardware. Yes, that works even with this monolith.
    • Containers include all the dependencies, so no more “it works on my machine.” Yes, I could have 10 different versions of the JVM on my machine, and if this is running in a container, that still works for me. My container works regardless of what I have on my host machine.
    • Containers allow easy application movement and distribution. Yes, I can pick up that 12 and a half gigs, go drop it on a different machine, and it’s going to run the same way.
    • Containers use the OCI standard. Yes, now my container is useful with all the different cloud providers, private cloud, public cloud, and all the DevOps tooling.
    • And then developers have a better experience working with containers. Yes, even if it’s a monolith, they’re still going to have a better experience than if it’s running directly on the host.

    Modernization

    Okay, so from here, let’s go into monolithic application modernization and containers. Now, if there is one thing you get out of this section, that one thing should be: there is no one right way of modernizing an application. It can vary by organization. It can vary by application. You’re going to have to figure out what the different strategies are, how to triage it, what your different patterns are going to be to actually do modernization, and it’s going to have to be for your particular organization, your particular culture, your particular app. I’m going to show you some different pieces, but it’s really up to you to figure out what’s going to work for you.

    Let’s talk about some of the modernization strategies. So there’s what’s known as the five R’s that were popularized by Gartner about a decade ago, and we’re going to talk about those. Now, if you go out there and search for modernization strategies on the web, you’re going to see that every consulting company has a version of this out there now. I saw a six R version, I saw a seven R version, I saw a nine R version, I’m sure there’s money to be made if you can come up with a 10 R version of this. But let’s go through these.

    • First is Rehost. This is minimal, no changes, lift and shift. This is pick it up from the hardware it’s on, drop it on something else.
    • The next is Refactor. So this is light modifications to the application, perhaps paying down technical debt.
    • Next we have Re-architect. Now we’re making significant modifications to the application, splitting or decomposing the application.
    • Then we get to Rebuild. So we are rewriting the application from the ground up, redesigning the application.
    • And finally, there is also Replace, which we’re not going to cover in this presentation, but it says, you know, if you can replace the entire application with something else, do that.

    All right. So what are the considerations for modernizing the application? If we use these strategies, which strategy or which combination of strategies are going to work for this particular application? So here are some of the things you need to consider.

    • What are the business priorities? What does the business want to do, and how does modernization of this application fit into that? So, who here has had a business leader stand up and say, “our business priority is to modernize this application”? The only time I’ve ever seen that happen is when that application is behaving so badly that it’s making news on its own. That’s when the business actually stands up and says something’s got to be done. Otherwise, nobody’s ever going to say that. They’re going to say, we want more customers, we want more profitability, we want to add new features and functionality, that kind of thing. So how does modernizing that application fit into those business priorities?
    • Application knowledge. Is there anybody anywhere who really understands how this application works and how it’s deployed? Keep in mind, this thing was built over 10 years ago. Are the people who built it still here? Do they really understand how this thing works? Or is everybody treating it as a black box, tip-toeing around the edges of it and hoping it doesn’t break?
    • Tied to that is the application tech stack. Are you still using the same technologies you were using 13 years ago? Are you still doing Java EE, or have you moved on to Golang or something else? Do you still have the competencies to actually work on this application in a detailed way?
    • Application lifespan. Does the business still need this application in its current form or have you moved on from this point? And how does that affect your modernization strategy?
    • Organizational capacity. Your developers, your testers, your operations people have their own day jobs. So where are you going to find the capacity to actually do this modernization? How is that going to work in comparison with everything else you’re supposed to be doing?
    • And finally, cost and risk. There is cost and risk in running a monolith. There’s also cost and risk in any of these modernization strategies; some of them have very significant cost and risk. These are all things you’ve got to weigh when you’re actually trying to determine how you’re going to modernize the application.

    Rehost

    So let’s start off. We’re going to do the rehost as our first option and see how that goes. So now containerization has paid off for me immediately. I could pick up this container. I could go drop it on any of the major cloud providers. I could put it in a private cloud. It’s trivial for me to do this now. That container just goes up, goes down. I set up a private back channel for my MQ and databases and I’m done. So containerization has paid off already.
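    As a sketch of that move (the registry and image names are hypothetical), the rehost is little more than a push and a pull:

    ```shell
    # From the current environment: tag the image and push it to a registry
    docker tag shop/monolith:2.3 registry.example.com/shop/monolith:2.3
    docker push registry.example.com/shop/monolith:2.3

    # On the new host or cloud container service: pull it and run it
    docker pull registry.example.com/shop/monolith:2.3
    docker run -d --name shop-monolith registry.example.com/shop/monolith:2.3
    ```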

    Refactor

    Now, let’s go to refactoring. So now, in this case, we’re going to try to get some easy wins. We’re going to try to pay down a little technical debt and make a couple of smart decisions without really impacting the application. So in this case, I’m going to go in and change that old SOAP call to REST for the API first off. I’m then going to recognize that putting operational reporting into a business application was never a good idea in the first place, so I’m going to yank that functionality out and put it into a third-party reporting tool which runs in a separate container. I’m also going to go through and update a few of these Java libraries and hopefully get up to a version that may be supported by a company or by the community at this point, and pay down a little bit of the technical debt associated with it. But I’m not making any big changes to what the application is or what it does.
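    The reporting piece, for example, simply becomes another container running alongside the monolith. A hedged sketch, where the image name, network, and port are hypothetical placeholders:

    ```shell
    # Run the third-party reporting tool next to the monolith, in its own container
    docker run -d \
      --name reporting \
      --network shop-net \
      -p 3000:3000 \
      reporting-vendor/reporting-tool:latest
    ```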

    Re-architect

    All right, re-architecting. So I am going to make some significant modifications at this point. So this is where life gets to be fun. So why split this thing into services? So, you know, we talk about the idea of splitting or decomposing the application. What is that actually providing for me? So let’s talk about some of the characteristics of services and then we’ll talk about some of the benefits of them.

    • So services can be independently deployable. So this means that the services don’t have to rely on other components. They can run and live on their own.
    • They are loosely coupled to other services. So there has to be some kind of intermediary or API to call between them.
    • They’re defined by a business function or capability. So let’s come back to Conway’s law at this point. When you’re defining services, you really need to understand what business function or capability these things are taking care of, not what organizational responsibility they map to.
    • Single responsibility. So they are responsible for one single thing. You’re not combining a bunch of different things into a single service; otherwise you still have a monolith.
    • They contain their own business logic and data management. So they understand the business rules around them and they understand the data management for things related to that particular function area.
    • And they communicate with each other via some kind of API or broker.

    Scope

    The next question becomes: what is the right scope for a service? So let’s talk about some terminology here. People have various terminology for this stuff; this is what I’m using. Certainly, if you have your own version, that’s fine.

    • So we have what’s known as a macroservice, or a monolith. This is what we’ve been talking about up to this point: a monolith where everything is deployed together and it’s all tightly coupled.
    • We have microservices. Everybody’s heard of microservices. Each covers a business component, is independently deployed, is loosely coupled, and communicates via API. Pretty nice. You know, microservices have a lot of functionality and capabilities.
    • But then we have a couple of other options. We have miniservices. So if we want to group multiple microservices into a business function, perhaps via a process, that kind of thing can be known as a miniservice.
    • And then nanoservices. All right. Now we’re getting down to a particular widget or a particular thing on a page, where we make it a very fine-grained scope and have something that’s very small and deployable.

    So what is the right scope for a service? There is no one right scope. Again, it varies depending upon what you’re trying to do. The right scope is the one that’s just big enough to give you real value and not bigger.

    Benefits of services versus monoliths

    Okay, let’s talk about the benefits of services versus monoliths. A lot of these are very apparent, but we’ll go through them. All right.

    • Smaller independent development teams. So if we have small services, we can have a small team for each one of these. Instead of having 50 or 100 people working on a monolith, I can have five or 10 people working on an individual service, because these things are being developed independently of each other.
    • That means I have independent release schedules. So now I have faster time to market for my changes. This can be a business advantage if you can do it. If you can get those release schedules down from quarterly or six months to a week, or even an hour, that’s a huge difference.
    • Scale services independently. If there is a particular service that needs to be scaled and the rest of the application doesn’t, you can scale that one service and not the rest of it (see the sketch after this list). That’s going to save hugely on resource cost.
    • Smaller-scope developer experience. Again, we want to make the developer experience better. We want to make it easier for developers to work on their local machine, to do their testing, and to get that feedback immediately so they can increase their productivity.
    • Blue-green deployments. The idea is that I can have multiple versions of the same service running at the same time and be able to get immediate production feedback on both of those versions and see which one works better. Again, if you can do this, it can be a business advantage to be able to do so. It can greatly accelerate your feedback for the business.
    • Reusability of services. This is kind of a white whale in the services area. But, ideally, you could have something like a purchasing service or a tax service or an inventory service and be able to reuse that service over and over again between different applications.
    • Tech agnostic services. Now I don’t have to have everything written in the same language. It does not all have to be Java EE. We can actually have a variety of different technologies, a variety of different frameworks all working together for the application.
    • Better application reliability / better-controlled failure modes. If we think about it, if we have a particular service that’s failing for whatever reason, that service can fail and go into a failure mode; it doesn’t mean the rest of the application has to fail. Whereas with the monolith, with the Java memory leak, the whole thing is going down and nobody has any capability at that point.
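    As a sketch of that independent scaling (the service and project names are hypothetical), with Docker Compose you can scale one service without touching the others:

    ```shell
    # Scale only the payments service to five replicas; everything else stays as-is
    docker compose up -d --scale payments=5
    ```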

    Challenges and pitfalls

    Okay, so challenges and pitfalls. So I’ve said services are great and wonderful and that everybody should use them everywhere. But are there challenges? Yes.

    Complexity — The first one is a killer. This is an absolute killer. If you do not plan for this, you will suffer from complexity. There is technology complexity. There is process complexity. There is organizational complexity that you have to plan for and understand when you’re moving to a services architecture. So let’s go back. Smaller independent development teams — is your organization ready for that? Does it have the maturity to be able to do that? Independent service release schedules: are your release processes ready for that? Can they handle it? Scaling services independently, blue-green deployments: is your technology stack ready to do this? These are all things you’ve got to think about when you’re actually making these changes.

    Latency — Yes, network hops are real. What I mean by that is, when you’re running in a monolith and you’re rendering a page, it is all running within that particular service. Now, you could say that particular service is not running very quickly. That’s fine. But it is all within one service. Whereas if you have five microservices, or ten microservices, or fifty microservices to render that same page, you are taking on the network hops between each one of those particular services. And that is real latency that adds up and that you’re going to have to account for.

    Distributed monoliths — Here’s the third time I’m going to mention Conway’s law in this presentation. When you are decomposing an application into services, those services need to be independent of each other. They need to be loosely coupled. If you do not do that, then what you wind up with is a bunch of services that are highly dependent upon each other, have to be deployed together, are versioned together, and have to work together at all times.
    What that means is that you’ve now got a situation where you have all the negatives of a monolith and all the complexity of a services architecture and you’ve added the two things together. This is a real thing that really happens when people do this. So it’s something you have to keep an eye out for.

    Too fine-grained services — Let’s make everything a service. Let’s make every widget on the page a service. Let’s make every field a service. No. Why? Why would you do that? If the particular feature or function that you’re going to make a service does not provide real value by being a service, then don’t make it a service. Find a bigger scope that makes sense for you. Again, services should be big enough to provide real value to you, and if they don’t, then you need to look at how fine-grained you’re making that scope.

    Technology is the goal — Who here has had a CEO stand up and say, “our goal is to get to Kubernetes, our goal is to run Istio”? Nobody. Nobody says that. The reason why nobody says that? Because technology is not the goal. Technology is the means to the goal. Technology is the way you get to the goal. So if you are ever in a situation where people are saying that technology itself is the goal, then you really need to step back and try to understand what the business goals are and how you’re actually working toward those.

    Strangler figs

    The next question becomes, how do we split a monolithic application into services? So we have this big monolith. We want to go to services. How do we do it? There are a lot of design patterns, a lot of articles, a lot of blogs, a lot of other documentation on how to do this. I’m going to give you one way, but don’t feel that’s the only way out there.

    We’re going to talk about a monolith decomposition strategy called the Strangler Fig pattern. And because of the name, I need to give some background on where it came from and why, and that kind of thing. This is from Martin Fowler’s website; you can go there and see this information. He was on vacation, and he saw strangler figs. Strangler figs are figs that actually seed in the branches of trees. What they do is grow their roots down to the ground, surround the tree they started in, and choke the tree to death. They basically kill the tree that they grew on.

    Martin saw that as a great metaphor for applications. Instead of rewriting an application wholesale, why not actually build new services or new systems around it and let those new systems or services slowly strangle the original application? So this became the design pattern that has been used to decompose a monolith into services.

    Identify services

    All right, so we need to identify services first. We need to identify and create the services from the monolith. The first thing we’re going to do is identify logical components. What is a grouping within this particular application that makes logical sense? Then we’re going to understand the dependencies between those components. What are those dependencies? How tightly coupled are they? If they are tightly coupled, is there a way to change them to be loosely coupled? We need to find and remove duplication of functionality between these components, because you will find that some of these components are doing the same things in some cases. Once we’ve done that, we can create groupings of components that we’re going to call services. We can then determine the granularity of those services, anywhere from macro to nano. And then we can have an API or broker to remotely call those services.

    In this example, I’ve done the first few steps of that. Please note, at this point, I have not changed the code in this application at all. What I’ve done is identified the components. I’ve understood the dependencies between those and I’ve done a logical grouping of those components into services. So now I have orders, customers, inventory, search, payments, and support cases.

    The first thing I’m going to do is drop an HTTP proxy in its own container off to the side here. This is going to allow me to remotely call these services and be able to split them out. I’m then going to pick a particular service that I think is ripe to be the first one to split out. So I’m going to consider payments as one that’s going to be easier and more self-contained to split out. I’m going to take it and put it in its own container. Once I’ve done that, I can start redirecting requests for payments to that new container, and then I can remove the payments functionality from the old container. Now, it’s important to note, this is the same code that was down in the main container. I’ve not changed it other than making it something that gets called via an API.
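    Here is a hedged sketch of that first split, using Docker Compose with nginx as one possible proxy. All of the names, images, and paths are hypothetical placeholders for the pattern described above:

    ```yaml
    # docker-compose.yml: HTTP proxy in front, payments split out, monolith intact
    services:
      proxy:
        image: nginx:1.25
        ports:
          - "80:80"
        volumes:
          # proxy.conf routes /payments/* to the payments container
          # and everything else to the monolith
          - ./proxy.conf:/etc/nginx/conf.d/default.conf:ro
      monolith:
        image: shop/monolith:2.3   # the original application, minus payments
      payments:
        image: shop/payments:1.0   # the same payments code, now behind an API
    ```

    As each remaining service is split out the same way, the proxy configuration gains one route at a time while the monolith shrinks.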

    Taking that a step further, I could go through the rest of these particular services and split them out. So now each one of them is running in its own container, and we have the reporting container that I had earlier running here as well. Now, one other thing I’m going to note here: I’m not showing it in this particular diagram, but each one of those containers still has the full Java EE server, the Spring stuff, all the Java libraries, that kind of thing, because I’m not changing the functionality. I’ve simply split it out to get the benefits of services, but I’m not changing the technology stack.

    Rebuild

    Now we’re going to go to rebuild. We’re going to rewrite, redesign the application. So now we have a fresh sheet of paper. We are going to do it right this time. We’re going to use all the modern stuff. It’s going to be great. If we do this and we start over, we can really just change what the application is, what it does, what frameworks I’m using, all that type of stuff. So by doing this, I can wind up with something that looks like this, for example.

    So now I’ve got a bunch of different services. I’ve got some things written in Golang because that’s our current technology stack. I’ve got some existing things for reviews, discussions, and social media integrations that are written in PHP. I’ve got a third-party tool that we use for CRM, and I’m going to use third-party tools for search and for reporting. I’ve got an AI/ML chatbot that I’m going to use for customer success, and I’ve got an inventory module that I’m going to write in Rust.

    Now, you know, the sky’s the limit at this point. If I’m rewriting the application from the start, I could just change everything and anything that I want to change. This is obviously the most time-intensive and cost-intensive of the options, but it gives you the most flexibility in what you’re doing. I think you can see each one of those is in its own container.

    Okay, so we’ve gone through each of the different modernization strategies. We’ve not talked about replace because, again, replace, you’re just replacing the whole thing.

    All right, to summarize where we are: there are lots of monolithic applications still out there, and modernization of those applications is going to continue for years to come. To everybody who thinks that all the applications have been modernized at this point: I’m sorry, you’ve not been talking to many of the customers I have.

    There is no one right path to modernization. You have to determine what the best path is for your organization and for your particular application. Containers help with the modernization process regardless of where you are on that journey, and containers can help with your application regardless of whether it’s a monolith or a microservice.

    Thank you for that. I would like to get a microphone if I can in case there are questions that anybody might have. Don’t everybody jump up at once. All right, thanks everybody.

    Q&A

    Questions? Anybody have a monolith they want to talk about?

    Yes, so the question is: when we talk about decomposing an application into services, what about the database, what about the data stores, how does that work? There are a couple of different options to think about there. You can continue to use the databases the way they are, assuming the microservices have enough knowledge to be able to interact with them that way. If there are data rows being written that multiple services need, that can trip you up, and you might need some kind of translation layer to make that work. Or, option B, you can actually start splitting the databases out into smaller data stores when you move those into microservices. Again, it’s hard to say without looking at a specific example, but both of those options are possible depending upon what you’re doing.

    I saw another question there. The question is, if I understood correctly: when you modularize or break apart the application into small components, do you still need an operating system for each one of those small components? The answer is yes. Regardless of whether a container is a monolith or a microservice, it still needs its dependencies, including the operating system libraries and that kind of thing, to be a container, and then it’s going to run on a Linux kernel or a Windows host. So, yes, those requirements still remain, regardless of what size the application is.

    The next question is: does that make it larger overall in disk space if you break the application apart? The idea with smaller services is that each one of those shouldn’t have all the same requirements as far as libraries and that kind of thing. So, ideally, it should be smaller. Now, if you don’t worry about the dependencies, if you say, I’m just not going to worry about it, then, yes, all the components together will be larger than the original one was. But there are other benefits to creating services beyond just the file system size.

    Any other questions? Yes. Let me restate the question: why did I pick payments as the first service to break out, and what are some of the considerations in choosing a particular service to start with? The reason I chose payments is that I felt it was more isolated from the rest of the application, because basically a payments call, in my example, is a single call and return. That meant the API was relatively straightforward, which means it would be easy for me to pick it up and move it out. That was the only criterion I was using, but obviously for your particular applications, there may be other criteria you need to look at as well.

    The next question is, if you break the application up into like 25 different services, then suddenly you have to start worrying about failure states on 25 different services. You have to start worrying about testing across 25 different services. You have to start worrying about, hey, can I run 25 different services on my developer workstation? These are all real concerns. These are all things that you actually have to go and figure out.

    So there is complexity, as I’ve said before, each time you split the application out into a different service. That’s something you’re going to have to think about. You’re going to have to think about what frameworks you’re going to use, testing frameworks, other process frameworks, to handle these types of things. Those are real concerns. I don’t have a simple “do this and it all works” answer for that. But these are things you’ve got to consider as you’re splitting the application, and that’s another reason not to go too fine-grained with the number of services that you’re creating.

    Any more questions? Yes? So the question is around compliance and policies, and how does breaking an application into parts affect that? So if we think about, for instance, a payments module and PCI compliance: by having that encoded in each monolith, each of those monoliths has to comply with that particular standard, and if the standard changes, each one of those applications has to change. Whereas if you have that in a service and have it contained, you have one place where you have to comply with that particular specification. If things change, it changes in one place and everybody gets to reuse it. So there are benefits to doing it that way. But again, there is complexity around that as well that has to be accounted for.

    Anything else? Stunned them into silence. Okay. Thank you very much.


    This article contains the YouTube transcript of a presentation from DockerCon 2023. “How do I stuff that huge monolithic application into a container?” was presented by Kevin Barfield, Principal Solutions Architect, Docker.
