Author: fire_horse

  • LeetCode – Maximum Ice Cream Bars

    1833. Maximum Ice Cream Bars

    It is a sweltering summer day, and a boy wants to buy some ice cream bars.

    At the store, there are n ice cream bars. You are given an array costs of length n, where costs[i] is the price of the ith ice cream bar in coins. The boy initially has coins coins to spend, and he wants to buy as many ice cream bars as possible. 

    Return the maximum number of ice cream bars the boy can buy with coins coins.

    Note: The boy can buy the ice cream bars in any order.

    import java.util.Arrays;

    class Solution {
        public int maxIceCream(int[] costs, int coins) {
            // Greedy: sort prices ascending and buy the cheapest bars first.
            Arrays.sort(costs);
            int count = 0;
            for (int i = 0; i < costs.length; i++) {
                if (costs[i] > coins) {
                    break; // every remaining bar costs at least this much
                }
                coins -= costs[i];
                count++;
            }
            return count;
        }
    }
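    A quick sanity check of the greedy solution against the problem's published examples (the standalone class name IceCreamDemo is just for this sketch):

```java
import java.util.Arrays;

public class IceCreamDemo {
    // Same greedy logic as the Solution class above, as a static helper.
    static int maxIceCream(int[] costs, int coins) {
        Arrays.sort(costs);
        int count = 0;
        for (int c : costs) {
            if (c > coins) break; // cheapest remaining bar is unaffordable
            coins -= c;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // costs = [1,3,2,4,1], coins = 7: buy bars priced 1, 1, 2, 3
        System.out.println(maxIceCream(new int[]{1, 3, 2, 4, 1}, 7)); // prints 4
    }
}
```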
  • Adequate protection against accidental deletion of objects in Amazon S3

    There are a few options that you could consider to provide adequate protection against accidental deletion of objects in Amazon S3:

    1. Use versioning: You can enable versioning for your Amazon S3 bucket, which keeps every version of your objects (deletions included, recorded as delete markers). If an object is accidentally deleted, you can recover it by restoring a previous version or removing the delete marker.
    2. Use object locking: You can enable object locking for your Amazon S3 bucket, which allows you to lock objects so that they cannot be deleted or overwritten for a specified period of time. This can help prevent accidental deletion of objects.
    3. Use cross-region replication: You can set up cross-region replication for your Amazon S3 bucket, which replicates objects to a destination bucket in a different region. This can provide an additional layer of protection against data loss due to accidental deletion or other issues in the primary region.
    4. Use Lifecycle policies: You can use Lifecycle policies to automate the transition of objects to different storage classes or to delete objects that are no longer needed. This can help reduce the risk of accidental deletion by ensuring that objects are only retained for as long as they are needed.
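    As a minimal sketch of option 1, versioning can be turned on from the AWS CLI; the bucket name below is a placeholder, and the commands assume configured AWS credentials:

```shell
# Enable versioning on an existing bucket (bucket name is a placeholder).
aws s3api put-bucket-versioning \
    --bucket my-example-bucket \
    --versioning-configuration Status=Enabled

# Confirm the change; the output should report "Status": "Enabled".
aws s3api get-bucket-versioning --bucket my-example-bucket
```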
  • Top 10 Cloud Certifications

    AWS Certified Cloud Practitioner

    AWS Certified Solutions Architect — Associate

    AWS Certified Solutions Architect — Professional

    AWS Certified Developer — Associate

    AWS Certified SysOps Administrator — Associate

    Microsoft Certified: Azure Fundamentals

    Microsoft Certified: Azure Solutions Architect Expert

    Microsoft Certified: Azure Administrator Associate

    Google Associate Cloud Engineer

    Google Professional Cloud Architect

  • Tips for studying for the AWS Certified Solutions Architect Associate exam

    Here are a few tips for studying for the AWS Certified Solutions Architect Associate exam:

    1. Review the exam blueprint: The exam blueprint is a detailed outline of the topics and concepts that will be covered on the exam. Reviewing the blueprint can help you focus your studies and ensure that you are thoroughly prepared for the exam.
    2. Use AWS documentation and whitepapers: AWS provides a wealth of documentation and whitepapers that can be helpful for studying for the exam. Make sure to read through these materials and become familiar with the various services and technologies that are covered on the exam.
    3. Take practice exams: There are many practice exams available that can help you prepare for the AWS Certified Solutions Architect Associate exam. These exams can help you get a feel for the types of questions that will be on the exam, as well as identify areas where you may need to focus your studies.
    4. Use online resources: There are many online resources available, such as blogs, forums, and online courses, that can be helpful for studying for the AWS Certified Solutions Architect Associate exam. These resources can provide additional information and guidance to help you prepare.
    5. Attend training courses: AWS offers a range of training courses that can be helpful for studying for the AWS Certified Solutions Architect Associate exam. These courses can provide hands-on experience with the technologies covered on the exam and help you build a solid foundation of knowledge.
  • Best practices to follow when reviewing Appian code:

    Here are a few best practices to follow when reviewing Appian code:

    1. Use a code review tool: A code review tool can help you automate some of the review process, such as checking for coding standards or identifying potential issues. This can save time and help ensure that you don’t miss anything important.
    2. Follow established coding standards: Make sure that the code being reviewed follows established coding standards, such as those outlined in the Appian Code Review Best Practices guide. This will help ensure that the code is easy to read and maintain.
    3. Look for potential issues: Check for potential issues such as security vulnerabilities, performance issues, or bugs. If you find any issues, make sure to document them clearly and provide recommendations for how to address them.
    4. Consider maintainability: Consider the maintainability of the code being reviewed. Is it easy to read and understand? Is it well-documented? Are there any areas where the code could be refactored to make it more maintainable?
    5. Provide constructive feedback: Provide constructive feedback to the developer who wrote the code. Be specific about what you liked and what you would like to see improved, and provide suggestions for how to address any issues you have identified.
  • Writing a Performance Self Review for Software Engineers

    It is understandable that the process of conducting a performance self-review as a software engineer can be challenging and time-consuming. However, it is an important aspect of professional development and can provide valuable insights into areas where you may be excelling and areas where you may have room for improvement.

    One way to approach this task is to focus on your accomplishments and contributions over the past year. This can include specific projects or tasks that you have completed, as well as more general achievements such as learning new skills or technologies.

    It can also be helpful to consider feedback you have received from colleagues or supervisors, and to reflect on how you have applied this feedback to your work. This can provide a more holistic view of your performance and can help you identify areas where you may need to focus your efforts.

    Overall, while it may not be the most enjoyable task, taking the time to conduct a performance self-review can be a valuable opportunity for personal and professional growth.

    Here are some tips for writing a performance self-review as a software engineer:

    1. Identify your accomplishments and contributions: Start by listing out your accomplishments and contributions over the past year. This can include specific projects or tasks you have completed, as well as more general achievements such as learning new skills or technologies.
    2. Reflect on feedback: Consider feedback you have received from colleagues or supervisors, and reflect on how you have applied this feedback to your work. This can provide valuable insights into your strengths and areas for improvement.
    3. Set goals for the future: Use your self-review as an opportunity to set goals for the future. This can include specific goals related to your work as a software engineer, as well as more general goals related to your personal and professional development.
    4. Be honest and objective: Be honest and objective in your self-review. It’s important to be realistic about your strengths and areas for improvement, and to avoid overstating your accomplishments or downplaying your challenges.
    5. Seek feedback from others: Consider seeking feedback from others, such as colleagues or supervisors, to help you get a more well-rounded view of your performance. This can be especially helpful if you have trouble coming up with a complete list of your accomplishments or if you struggle with self-assessment.
  • Tips for how to ask someone to review your work

    Here are a few tips for how to ask someone to review your work:

    1. Be specific about what you would like reviewed: It’s important to be clear about what you are asking the person to review. This might include a specific document, codebase, or other piece of work.
    2. Provide context: It can be helpful to provide some context or background information about the work you are asking the person to review. This can help them understand your objectives and the purpose of the work.
    3. Set a timeline: If you have a deadline for when you would like the review to be completed, it’s important to communicate that to the person you are asking. This can help them prioritize the review and ensure that they are able to meet your timeline.
    4. Be polite and respectful: It’s important to remember that the person you are asking to review your work is doing you a favor, so be sure to thank them in advance and be respectful of their time.
    5. Offer to return the favor: If you would like to ask someone to review your work, it can be helpful to offer to review their work in return. This can create a sense of reciprocity and make the request more palatable for the other person.
  • Lab 4: Setting Up a Data Lake with Lake Formation

    Create a data lake using AWS Lake Formation. Set up an AWS Glue crawler to determine the schema, and then create tables in the AWS Glue Data Catalog.
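    One way to script the crawler and catalog steps is with the AWS CLI; the database name, crawler name, S3 path, and IAM role ARN below are all placeholders, not values from the lab:

```shell
# Create a Data Catalog database for the crawler to write into.
aws glue create-database --database-input '{"Name": "datalake_db"}'

# Create a Glue crawler that scans an S3 prefix and infers table schemas
# (role ARN and bucket path are placeholders).
aws glue create-crawler \
    --name datalake-crawler \
    --role arn:aws:iam::111111111111:role/GlueCrawlerRole \
    --database-name datalake_db \
    --targets '{"S3Targets": [{"Path": "s3://my-datalake-bucket/raw/"}]}'

# Run the crawler to populate tables in the Data Catalog.
aws glue start-crawler --name datalake-crawler
```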

  • Lab 5: Migrating an On-Premises NFS Share Using AWS DataSync and Storage Gateway

    This lab demonstrates how to use AWS DataSync and an AWS Storage Gateway file gateway to migrate data from an on-premises Network File System (NFS) server to Amazon Simple Storage Service (Amazon S3).

    After completing this lab, you will be able to:

    Deploy and activate a DataSync agent as an Amazon Elastic Compute Cloud (Amazon EC2) instance

    Create a DataSync task to copy data from a Linux-based NFS server to an S3 bucket

    Deploy and activate a Storage Gateway file gateway appliance as an EC2 instance

    Create an NFS file share on a file gateway

    Configure a Linux host to connect to an NFS share on a file gateway

    Task 1 : Connect to the On-Premises NFS Server

    To mount the on-premises NFS share on the client instance, run the following command.

    Replace <NfsServerPrivateIp> with the actual private IP address of the NFS server:

    sudo mount <NfsServerPrivateIp>:/var/nfs /mnt/nfs
    sudo mount 10.10.2.132:/var/nfs /mnt/nfs

    To verify that the NFS file share was mounted successfully, run the following command:

    df -h
    
    Filesystem            Size  Used Avail Use% Mounted on
    devtmpfs              475M     0  475M   0% /dev
    tmpfs                 492M     0  492M   0% /dev/shm
    tmpfs                 492M  392K  492M   1% /run
    tmpfs                 492M     0  492M   0% /sys/fs/cgroup
    /dev/xvda1            8.0G  1.1G  7.0G  14% /
    tmpfs                  99M     0   99M   0% /run/user/1000
    10.10.2.154:/var/nfs  8.0G  1.1G  7.0G  14% /mnt/nfs
    ls /var/nfs
    1.png  10.png  2.png  3.png  4.png  5.png  6.png  7.png  8.png  9.png

    Task 2 : Deploy and Activate a DataSync Agent Instance

    In an on-premises environment, the DataSync agent can also be deployed as a VMware-based virtual machine.

    Task 3 : Create and Run a DataSync Task
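    A rough sketch of this step with the AWS CLI; the source and destination location ARNs, and the task ARN, are placeholders for resources created earlier in the lab:

```shell
# Create a DataSync task linking the NFS source location to the
# S3 destination location (both ARNs are placeholders).
aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-east-1:111111111111:location/loc-source0000000000 \
    --destination-location-arn arn:aws:datasync:us-east-1:111111111111:location/loc-dest000000000000 \
    --name nfs-to-s3

# Start a run of the task (task ARN is a placeholder taken from the
# create-task output).
aws datasync start-task-execution \
    --task-arn arn:aws:datasync:us-east-1:111111111111:task/task-0000000000000000
```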

  • Lab 3: Deploying an Application with Amazon ECS on Fargate

    Deploy a web-based application as a Docker container image. After verifying that the image is successfully created, push it to Amazon Elastic Container Registry (Amazon ECR). Then, launch an Amazon Elastic Container Service (Amazon ECS) cluster and create an AWS Fargate profile. Finally, deploy the application to a Fargate cluster.

    docker -v && git --version
    
    Docker version 20.10.13, build a224086
    git version 2.37.1
    pwd
    
    /home/ssm-user
    # or
    /usr/bin
    cd ~
    git clone https://github.com/gabrielecirulli/2048
    ls -l | grep 2048
    
    drwxr-xr-x 6 ssm-user ssm-user 200 Jul 22 23:57 2048

    Task 2 : Containerize the Application

    cd 2048
    vim Dockerfile
    FROM nginx:latest
    
    COPY . /usr/share/nginx/html
    
    EXPOSE 80

    Task 3 : Build the Web2048 Container

    cd ~/2048
    ls -l
    
    -rw-r--r-- 1 ssm-user ssm-user 1970 Jul 22 22:12 CONTRIBUTING.md
    -rw-r--r-- 1 ssm-user ssm-user   59 Jul 22 23:00 Dockerfile
    -rw-r--r-- 1 ssm-user ssm-user 1083 Jul 22 22:12 LICENSE.txt
    -rw-r--r-- 1 ssm-user ssm-user 2280 Jul 22 22:12 README.md
    -rw-r--r-- 1 ssm-user ssm-user  300 Jul 22 22:12 Rakefile
    -rw-r--r-- 1 ssm-user ssm-user 4286 Jul 22 22:12 favicon.ico
    -rw-r--r-- 1 ssm-user ssm-user 3988 Jul 22 22:12 index.html
    drwxr-xr-x 2 ssm-user ssm-user  252 Jul 22 22:12 js
    drwxr-xr-x 2 ssm-user ssm-user  125 Jul 22 22:12 meta
    drwxr-xr-x 3 ssm-user ssm-user   72 Jul 22 22:12 style
    sudo usermod -aG docker ssm-user
    newgrp docker
    docker images
    docker build -t web2048 .
    Sending build context to Docker daemon   1.36MB
    Step 1/3 : FROM nginx:latest
    latest: Pulling from library/nginx
    461246efe0a7: Pull complete
    060bfa6be22e: Pull complete
    b34d5ba6fa9e: Pull complete
    8128ac56c745: Pull complete
    44d36245a8c9: Pull complete
    ebcc2cc821e6: Pull complete
    Digest: sha256:1761fb5661e4d77e107427d8012ad3a5955007d997e0f4a3d41acc9ff20467c7
    Status: Downloaded newer image for nginx:latest
     ---> 670dcc86b69d
    Step 2/3 : COPY . /usr/share/nginx/html
     ---> af91f0e53954
    Step 3/3 : EXPOSE 80
     ---> Running in c854197142b7
     ...
    
    ...
    Successfully built 7de26addeed8
    
    docker images
    
    REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
    web2048      latest    7de26addeed8   24 minutes ago   143MB
    nginx        latest    670dcc86b69d   3 days ago       142MB
    docker run -d -it -p 80:80 web2048
    history | grep container
    ctrl + r
    
    (reverse-i-search)`container': docker container ls
    
    # press enter and the "list containers" command runs again.
    
    [ssm-user@ip-10-0-0-55 2048]$ docker container ls
    CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                               NAMES
    0e5c8fe77af7   web2048   "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   elated_boyd
    
    
    docker container ls
    
    curl http://169.254.169.254/latest/meta-data/public-ipv4 -w "\n"
    
    docker stop <CONTAINER_ID>

    Task 4 : Create an Amazon ECR Repository and Push the Docker Image

    docker images
    
    REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
    web2048      latest    df0b3b5dd073   2 hours ago   143MB
    nginx        latest    670dcc86b69d   3 days ago    142MB
    aws configure
    
    
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name []: <YOUR_REGION>
    Default output format [json]: json
    
    aws ecr create-repository --repository-name web2048
    
    
    {
        "repository": {
            "repositoryUri": "294373654843.dkr.ecr.us-east-1.amazonaws.com/web2048",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            },
            "registryId": "294373654843",
            "imageTagMutability": "MUTABLE",
            "repositoryArn": "arn:aws:ecr:us-east-1:294373654843:repository/web2048",
            "repositoryName": "web2048",
            "createdAt": 1658766018.0
        }
    }
    
    aws ecr describe-repositories --query 'repositories[].[repositoryName, repositoryUri]' --output table
    
    ---------------------------------------------------------------------
    |                       DescribeRepositories                        |
    +---------+---------------------------------------------------------+
    |  web2048|  294373654843.dkr.ecr.us-east-1.amazonaws.com/web2048   |
    +---------+---------------------------------------------------------+
    
    export REPOSITORY_URI=$(aws ecr describe-repositories --query 'repositories[].[repositoryUri]' --output text)
    echo ${REPOSITORY_URI}
    
    export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
    
    export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
    
    aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
    
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    
    docker tag web2048:latest ${REPOSITORY_URI}:latest
    docker images
    
    REPOSITORY                                             TAG       IMAGE ID       CREATED         SIZE
    294373654843.dkr.ecr.us-east-1.amazonaws.com/web2048   latest    07a9581ba9e6   6 minutes ago   143MB
    web2048                                                latest    07a9581ba9e6   6 minutes ago   143MB
    nginx                                                  latest    670dcc86b69d   5 days ago      142MB
    
    docker push ${REPOSITORY_URI}:latest
    
    
    The push refers to repository [294373654843.dkr.ecr.us-east-1.amazonaws.com/web2048]
    9fb7edea8440: Pushed
    abc66ad258e9: Pushed
    243243243ee2: Pushed
    f931b78377da: Pushed
    d7783033d823: Pushed
    4553dc754574: Pushed
    43b3c4e3001c: Pushed
    latest: digest: sha256:daac922f8e9d6d2445c1ee1ab24a36190c0763d766908f7915cfcef53cec0123 size: 1780
    
    aws ecr describe-images --repository-name web2048
    {
        "imageDetails": [
            {
                "artifactMediaType": "application/vnd.docker.container.image.v1+json",
                "imageSizeInBytes": 57719510,
                "imageDigest": "sha256:daac922f8e9d6d2445c1ee1ab24a36190c0763d766908f7915cfcef53cec0123",
                "imageManifestMediaType": "application/vnd.docker.distribution.manifest.v2+json",
                "imageTags": [
                    "latest"
                ],
                "registryId": "294373654843",
                "repositoryName": "web2048",
                "imagePushedAt": 1658767453.0
            }
        ]
    }
    

    Task 5 : Create an ECS Cluster

    aws ecs create-cluster --cluster-name web2048
    {
        "cluster": {
            "status": "ACTIVE",
            "defaultCapacityProviderStrategy": [],
            "statistics": [],
            "capacityProviders": [],
            "tags": [],
            "clusterName": "web2048",
            "settings": [
                {
                    "name": "containerInsights",
                    "value": "disabled"
                }
            ],
            "registeredContainerInstancesCount": 0,
            "pendingTasksCount": 0,
            "runningTasksCount": 0,
            "activeServicesCount": 0,
            "clusterArn": "arn:aws:ecs:us-east-1:294373654843:cluster/web2048"
        }
    }
    
    cd ~
    echo ${REPOSITORY_URI}
    vim web2048_task_definition.json
    {
        "family": "web2048",
        "networkMode": "awsvpc",
        "taskRoleArn": "arn:aws:iam::000000000000:role/test-lab-3-ECSTaskExecutionRole-0000000000000",
        "executionRoleArn": "arn:aws:iam::000000000000:role/test-lab-3-ECSTaskExecutionRole-0000000000000",
        "containerDefinitions": [
            {
                "name": "web2048",
                "image": "000000000000.dkr.ecr.us-east-1.amazonaws.com/web2048:latest",
                "portMappings": [
                    {
                        "containerPort": 80,
                        "hostPort": 80,
                        "protocol": "tcp"
                    }
                ],
                "essential": true
            }
        ],
        "requiresCompatibilities": [
            "FARGATE"
        ],
        "cpu": "256",
        "memory": "512"
    }
    
    aws ecs register-task-definition --cli-input-json file://web2048_task_definition.json
    vim web2048_service.json
    {
        "cluster": "web2048",
        "serviceName": "web2048",
        "taskDefinition": "web2048",
        "loadBalancers": [
            {
                "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:000000000000:targetgroup/ECS-Target-Group/0000000000000000",
                "containerName": "web2048",
                "containerPort": 80
            }
        ],
        "desiredCount": 2,
        "launchType": "FARGATE",
        "platformVersion": "LATEST",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": [
                    "PublicSubnet1",
                    "PublicSubnet2"
                ],
                "securityGroups": [
                    "ecsSecurityGroup"
                ],
                "assignPublicIp": "ENABLED"
            }
        }
    }
    
    aws ecs create-service --cli-input-json file://web2048_service.json
    aws ecs describe-clusters --cluster web2048
    {
        "clusters": [
            {
                "status": "ACTIVE",
                "defaultCapacityProviderStrategy": [],
                "statistics": [],
                "capacityProviders": [],
                "tags": [],
                "clusterName": "web2048",
                "settings": [],
                "registeredContainerInstancesCount": 0,
                "pendingTasksCount": 0,
                "runningTasksCount": 2,
                "activeServicesCount": 1,
                "clusterArn": "arn:aws:ecs:us-east-1:310899899985:cluster/web2048"
            }
        ],
        "failures": []
    }
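    With the service created, one way to confirm that the deployment reached its desired count is to query the cluster and service directly (the cluster and service names match the JSON definitions above):

```shell
# List the Fargate tasks running in the cluster.
aws ecs list-tasks --cluster web2048

# Summarize the service's status and task counts; a healthy deployment
# shows status ACTIVE with running equal to desired (2 in this lab).
aws ecs describe-services \
    --cluster web2048 \
    --services web2048 \
    --query 'services[0].{status: status, running: runningCount, desired: desiredCount}'
```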