
Exploring the Solutions

Sumit Rawal answered on June 23, 2023

We could create a process that replicates data in (near) real time between EBS volumes spread across multiple availability zones, but that also comes with a downside. Such an operation would be expensive and would likely slow down state retrieval while everything is fully operational. Should we sacrifice performance for the sake of high availability? Is the increased operational overhead worth the trouble? The answers to those questions will differ from one use case to another.

There is yet another option. We could use Elastic File System (EFS) instead of EBS. But that would also impact performance, since EFS tends to be slower than EBS. On top of that, there is no production-ready EFS support in Kubernetes. At the time of this writing, the efs provisioner is still in the beta phase. By the time you read this, things might have changed. Or maybe they didn’t. Even when the efs provisioner becomes stable, it will still be a slower and more expensive solution than EBS.
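To make the EFS option more concrete, here is a minimal sketch of what EFS-backed dynamic provisioning can look like. It assumes the AWS EFS CSI driver (a newer alternative to the efs provisioner mentioned above) is installed in the cluster, and the file system ID is a placeholder:

```yaml
# Sketch only: assumes the AWS EFS CSI driver is installed.
# fs-12345678 is a placeholder file system ID, not a real one.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678
  directoryPerms: "700"
```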

Maybe you’ll decide to ditch EBS (and EFS) in favor of some other type of persistent storage. There are many different options to choose from. We won’t explore them here, since an in-depth comparison of all the popular solutions would require much more space than we have left.

All in all, every solution has pros and cons, and none would fit all use cases. For good or bad, we’ll stick with EBS for the remainder of this course.

Now that we have explored how to manage static persistent volumes, we’ll try to accomplish the same results using the dynamic approach. But before we do that, we’ll see what happens when some of the resources we created are removed.
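As a preview of where we are heading, dynamic provisioning boils down to a StorageClass plus a PersistentVolumeClaim that references it; Kubernetes then creates the EBS-backed volume on demand. A minimal sketch, using the in-tree kubernetes.io/aws-ebs provisioner (the resource names are illustrative):

```yaml
# Sketch only: a StorageClass for dynamically provisioned EBS volumes
# and a claim that triggers the provisioning. Names are illustrative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  storageClassName: ebs-gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```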

Closely Related Answers

An alternative solution would be to mount an NFS drive on all the nodes and store the file there. That would guarantee that the file is available on all the nodes, as long as we do NOT forget to mount NFS on each of them.

Another solution could be to create a custom Prometheus image. It could be based on the official image, with a single COPY instruction that would add the configuration. The advantage of that solution is that the image would be entirely immutable. Its state would not be polluted with unnecessary Volume mounts. Anyone could run that image and expect the same result. That is my preferred solution. However, in some cases, you might want to deploy the same application with a slightly different configuration. Should we, in those cases, fall back to mounting an NFS drive on each node and continue using hostPath?
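To illustrate, the Dockerfile for such a custom image could be as small as the sketch below. The image tag and the configuration file name are assumptions, not values prescribed by the official image:

```Dockerfile
# Sketch only: bake the configuration into an image based on the
# official one. Tag and file name are illustrative.
FROM prom/prometheus:v2.45.0
COPY prometheus.yml /etc/prometheus/prometheus.yml
```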

Even though mounting an NFS drive would solve some of the problems, it is still not a great solution. To mount a file from NFS, we need to use the nfs Volume type instead of hostPath. Even then, it would be a sub-optimal solution. A much better approach is to use a ConfigMap. We’ll explore it in the next chapter.
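For completeness, this is roughly what the nfs Volume type looks like in a Pod spec. The server address and export path are placeholders:

```yaml
# Sketch only: the nfs Volume type in a Pod spec.
# Server address and export path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  containers:
  - name: prometheus
    image: prom/prometheus
    volumeMounts:
    - name: config
      mountPath: /etc/prometheus
  volumes:
  - name: config
    nfs:
      server: 10.0.0.10
      path: /exports/prometheus
```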

Do use hostPath to mount host resources like /var/run/docker.sock and /dev/cgroups. Do not use it to inject configuration files or store the state of an application.
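As an example of the legitimate use, the sketch below mounts the host’s Docker socket into a container through hostPath. The image and the names are illustrative:

```yaml
# Sketch only: hostPath used for what it is good at, exposing a host
# resource (the Docker socket) to a container. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: docker-client
spec:
  containers:
  - name: main
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
```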


Contributed on Jun 23 2023 by Sumit Rawal