Author: fire_horse

  • Business Analyst

    Job Description:

    Job Duties & responsibilities (List the principal duties. Use concise statements that provide a clear understanding of the level of responsibility, complexity, creativity and analysis performed in this position.)
    • Drive the day-to-day operations of the IFAApps, SCM Tool, and EStorage applications, including reviewing data integrity between upstream source systems and reports, performing trend analysis, resolving exceptions, and acting as the primary point of contact for all data issues faced by the finance user community.
    • Handle Business Finance user queries, clarifications and requests promptly. Provide guidance to users on questions relating to the PreGL system.
    • Manage data issues on an end-to-end basis – from logging an issue to facilitating a solution and tracking the issue through to closure.
    • Manage projects in one of the applications in the finance systems space on an end-to-end basis. Identify areas for improvement, optimisation and standardisation in the normal course of work to streamline current processes, improve accuracy, ensure completeness and reduce turnaround time when receiving data feeds from upstream systems through PSGL & Data Ledger.
    • Collate and rationalise all stakeholders’ requirements and walk through these requirements with the Middle Office Technology (MOT) team.
    • Actively partner with MOT in the solutioning process to ensure that the business solutions being implemented are thought through from a long-term, standardisation, design-for-no-operations, and front-end-configuration perspective.
    • Assess and propose an appropriate testing approach and sign-off criteria to stakeholders and obtain their concurrence.
    • Manage UAT overall, including the UAT timeline, batch runs/re-runs, and testers’ results.
    • Prepare proper documentation of the data flow, design, business solutioning and testing approach in accordance with the predefined template.
    • Support the transition of this application to the new platform and new initiatives under implementation.

    Required Experience (indicate nature and extent of work experience including number of years required, if applicable)

    1. Quick learner, with a positive attitude towards initiating and making change happen.
    2. Driven, passionate self-starter; highly competent and able to work independently.
    3. Ability to work in a team of stakeholders from different areas, including the user community (requirements gathering and testing) and technology teams (development and rollout).
    4. Eye for detail and diligence in documentation.
    5. Fundamental accounting knowledge around banking products.
    6. Strong analytical, communication and project management skills.
    7. Well versed in Microsoft Excel, PowerPoint, Word and Office 365 applications.
    8. Knowledge of or hands-on experience with the PeopleSoft General Ledger application would be an added advantage.
    9. 5 to 8 years of experience working in the financial services industry.
    10. Prior experience in the banking industry would be preferred.
    11. Familiarity with the financial systems space in the DBS organisational context would be an added advantage.

  • Pega Consultant

    Designing and developing Pega BPM applications.

    Performing solution architecture within the Pega PRPC environment.

    Designing class structures, application frameworks, and data models.

    Coordinating with the project team to ensure the business architecture matches the needs of the customer.

    Integrating business databases, legacy systems, and web services.

    Troubleshooting application issues, coding bugs, and bottlenecks.

    Conducting system performance tests.

    Maintaining the security of all system applications.

    KNOWLEDGE AND SKILLS

    Proven work experience as a Pega Developer.

    Advanced knowledge of Pega PRPC 5.x/6.x/7.x.

    Familiarity with J2EE, XML, Java, JSP, and EJB.

    Knowledge of coding languages including AngularJS, Java, and jQuery.

    Knowledge of web technologies including JavaScript, CSS, HTML5, & XML.

    Excellent project management skills.

    Ability to troubleshoot complex software issues.

  • Software Engineer

    Role: Software Engineer.

    As a Software Engineer, you will be working with a talented team to enhance our world-class media management system. You will work in close collaboration with the product management team, UX designers and your scrum team to design and deliver new and innovative customer solutions.

    Skills and Experiences:
    • Knowledge of client-server architecture, networking, software applications, and databases.
    • 3–4 years’ experience in either Java, PHP, Perl, Ruby, Python, C, or C++ programming.
    • Hands-on experience with web applications and technologies such as, but not limited to, HTML, CSS, JavaScript, jQuery and APIs is also required.
    • Good understanding of UI, cross-browser compatibility, and general web functions and standards.
    • Hands-on experience in the system design, development and operation of web applications backed by an RDBMS.
    • Hands-on experience operating Unix/Linux systems.
    • Any experience programming with modern frameworks, web front-end technologies, software applications on OSS middleware/technology platforms, algorithm development, or data processing on Big Data technology platforms is a plus.

  • DevOps Engineer

    • Experienced in CI/CD and DevOps practices.
    • Experienced in system administration.
    • Experienced in vendor product implementations.
    • Experienced in OpenShift, Kubernetes, Docker, and RHEL.
    • Experienced in manual and automated deployment, including infrastructure provisioning/de-provisioning of resources on OpenShift.
    • Experienced in building and operating scalable, fault-tolerant, internet-facing systems.
    • Able to set up and administer infrastructure, including firewalls, databases, and VMs.
    • Experienced in handling network issues.
    • Experienced in incident management activities.
    • Handle release management activities, including developing and maintaining release-related documents such as the release plan and release notes.
    • Ensure quality releases and manage release and configuration change conflicts through to resolution.
    • Track releases and publish release notes.
    • Investigate and resolve technical issues by deploying updates/fixes.
    • Set up test automation and help the team execute it.
    • Perform daily system monitoring and respond to security or usability concerns.
    • Create and maintain documentation pertaining to architecture.
    • Maintain and administer user access.
    • Strong communication and coordination skills for working with multiple teams.

  • Appian Pricing

    https://www.appian.com/pricing/

  • Software Development Engineer II

    Amazon Web Services (AWS), Dallas, TX

    About the job

    Description

    Are you looking for an opportunity to create a new, global product from scratch? AWS is seeking an innovative, passionate, results-oriented leader to own the software systems, products, and tools needed to build an exciting new global product. This is a confidential strategic initiative for AWS.

    We are looking for an individual who thrives on creating high-bar engineering cultures, giving customers business intelligence that is timely, accurate, and actionable, and using the intersection of software and data to give customers “superpowers.”

    We have a team culture that encourages innovation, and we expect team members and management alike to take a high degree of ownership of their program vision and the execution of their ideas. Beyond a strong engineering, data storage/modeling, visualization, and front-end background, a successful candidate will have experience successfully leading teams, developing people, and building scalable infrastructure. We are looking for a self-starter who approaches complex business questions with data and curiosity, and who dives below the surface to identify the root cause and the “so what” rather than just superficial trends.

    Basic Qualifications

    • Degree in Computer Science, Engineering, Mathematics, or a related field and 3+ years industry experience
    • Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
    • Working knowledge of software development methodologies like Agile, Scrum, Kanban, or equivalent

    Preferred Qualifications

    • Advanced/Master’s degree in Computer Science, Engineering, Mathematics, or a related discipline
    • Meets/exceeds Amazon’s leadership principles requirements for this role
    • Meets/exceeds Amazon’s functional/technical depth and complexity for this role

    Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

    Company – Amazon Web Services, Inc.
    Job ID: A1233512

  • 20 Jan | Intro to Containers & Kubernetes

    If you are interested in learning more about Containers and Kubernetes, we thought you might find the upcoming virtual KubeAcademy workshop of interest: Introduction to Containers and Kubernetes

    20 January 2021
    9:45am SGT | 12:45pm AEDT | 2:45pm NZDT

    https://connect.tanzu.vmware.com/KubeAcademy-Tour-APJ.html

    Key Learnings:

    • How to use Dockerfiles to containerise applications (see the sketch after this list)
    • Authoring YAML files and their syntax
    • Kubernetes from a user-facing concept perspective
    • Kubectl and its commands/usage
    • The cloud-native design principles of Kubernetes and its architecture
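
    As a quick taste of the first and fourth items, a minimal sketch of containerising and running a trivial app (the image, file, and pod names here are hypothetical, not workshop material):

        $ cat Dockerfile
        # minimal Dockerfile: base image, app code, start command
        FROM python:3.9-slim
        COPY app.py /app/app.py
        CMD ["python", "/app/app.py"]
        $ docker build -t hello-app:v1 .           # build the container image
        $ kubectl run hello --image=hello-app:v1   # run it as a pod (assumes the cluster can pull the image)
        $ kubectl get pods                         # inspect the pod with kubectl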

    Olive Power came to VMware through the Heptio acquisition, working with end users on production Kubernetes. Previously, Olive spent several years at Red Hat on the emerging technologies specialist team, and before Red Hat she built up 18+ years of experience working on large-scale enterprise management. Olive also teaches Kubernetes and is currently an instructor for KubeAcademy – https://kube.academy. She is also proud to be a host of the popular podcast “thepodlets” – https://thepodlets.io. When not working, Olive is happily outnumbered by her two little boys.

    Abhishek Vijra (aka AV) is a technology leader in cloud-native application development, resilient and performant microservice architectures, and DevOps practices, with many years’ experience delivering business-critical systems for multiple high-stakes BFSI, telecom, and public sector projects. He has extensive experience leading teams and developing software solutions with different technologies for various businesses in a timely and cost-effective manner.

    Vino Alex has been working with the Kubernetes ecosystem for approximately five years. His primary interest is the performance of workloads running on Kubernetes. He is always curious to explore “how things work,” including containers, and to share that knowledge with customers and communities, stitching together solutions to real-world problems. When not exploring the Kubernetes ecosystem, he is a ham (amateur) radio enthusiast.

    Thanks for your registration. You will receive a confirmation email with the dial-in details.

    For any questions, please contact lsumi@vmware.com

    Thank you for registering for the virtual KubeAcademy Tour: Introduction to Containers and Kubernetes.
    Date: 20 January 2021
    Time: 9:45am SGT | 12:45pm AEDT | 2:45pm NZDT (2.5 hours)

    Important Steps:
    Please use this link to join the KubeAcademy Virtual Workshop. You will be prompted to create an account and sign in on Strigo, the platform we will be using to run the workshop. Please sign in to Strigo with the email address you used to register for the workshop. When prompted for an Event Token to join the meeting, please use CN4D. Virtual check-in and audio/visual testing will begin at 9:45am SGT, with content starting at 10:00am SGT sharp. This workshop will be taught in English.

    Prerequisites:
    • Laptop with the latest version of Chrome or Firefox
    • Linux concepts and command-line proficiency
    • General networking proficiency

    Should you have any questions or would like more information about this event, please email us at lsumi@vmware.com. We look forward to seeing you there.

    Regards,
    VMware Tanzu

  • Senior DevOps Engineer

    (National Digital Identity)

    Imagine citizens having a common and secure digital identity that will make their lives so much easier and open doors to a plethora of services – allowing citizens to do everything from accessing health prescriptions and completing government transactions to opening a bank account through their smartphone.

    If you are inspired by this vision, we invite you to join our National Digital Identity team. You will collaborate with a team of highly motivated peers to develop and deploy the next-generation National Digital Identity platform that will transform the lives of citizens by enabling secure and seamless delivery of personalized online digital experiences.

    What to expect:
    • Able to lead with authority and influence with positive energy
    • Understudy the Agile Development team and progressively take over operations support
    • Lead collaboration with development & monitoring team in enhancing service resiliency
    • Maintain service resiliency through high levels of automation, to effectively detect/predict/prevent issues in the environment and codebase
    • Adopt a hands-on approach to engineering solutions around the platform – coding is expected
    • Develop, implement and manage processes, automation, best practices, documentation in accordance with required security and ICT policies, standards, guidelines, and procedures
    • Develop and fine-tune incident management processes across teams
    • Develop and operate continuous integration and deployment pipelines
    • Work in highly collaborative teams and build quality environments
    • Ability to effectively prioritize and execute tasks in a high-pressure, fast-paced, global environment

    How to succeed:
    • Degree or Diploma in Computer Science/Engineering, Information Technology, Communications or other related disciplines
    • Cloud certifications from AWS or Google will be advantageous
    • 2 years’ experience in cloud-related operations or implementation
    • Strong knowledge of and experience in DevOps automation, containerization, and orchestration tools
    • Strong scripting skills, e.g. Python, Bash, JavaScript, Ruby
    • Strong understanding and practice of Agile/Lean methods such as Scrum and Kanban
    • Strong understanding of virtualization and networking
    • Experience with highly scalable distributed systems
    • Experience with infrastructure and application monitoring, especially in cloud-native monitoring solutions
    • Strong analytical and problem-solving skills
    • Familiarity with Site Reliability Engineering practices and methodologies will be advantageous
    • Cloud computing deployment and management experience – AWS, Google
    • Knowledge in open source technologies and configurations, Machine Learning technologies, and environments
    • Breadth of knowledge in OS, networking, distributed computing, cloud computing
    • Strong interpersonal & management skills to succeed in a highly collaborative and cross-functional team environment

  • df -h


    The df command is used to show the amount of disk space that is free on file systems. In the examples, df is first called with no arguments. The default action is to display used and free file space in blocks. In this particular case, the block size is 1024 bytes, as indicated in the output.

    The first column shows the name of the disk partition as it appears in the /dev directory. Subsequent columns show total space, blocks allocated, and blocks available. The capacity column indicates the amount used as a percentage of total file system capacity.

    The final column shows the mount point of the file system. This is the directory where the file system is mounted within the file system tree. Note that the root partition will always show a mount point of /. Other file systems can be mounted in any directory of a previously mounted file system. In the example, there are two other file systems: the first is mounted at /home and the second at /p4.
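
    The example output itself is not reproduced in this note; a hypothetical transcript consistent with the description above (device names and sizes invented) might look like:

        $ df
        Filesystem     1024-blocks      Used Available Capacity Mounted on
        /dev/sda1         20641788   7455792  12137436      39% /
        /dev/sda3         51475068  26151444  22701832      54% /home
        /dev/sdb1        103080888  41200220  56619340      43% /p4

    The exact header text varies between implementations; GNU df, for instance, labels the block column 1K-blocks and the capacity column Use%.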

    In the second example, df is invoked with the -i option. This option instructs df to display information about inodes rather than file blocks. Even though you think of directory entries as pointers to files, they are just a convenience for humans. An inode is what the Linux file system uses to identify each file. When a file system is created (using the mkfs command), it is created with a fixed number of inodes. If all of these inodes are used up, the file system cannot store any more files even though there may be free disk space. The df -i command can be used to check for such a problem.
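
    Again, the original example is not shown here; illustrative df -i output (invented numbers) for the same three file systems might look like:

        $ df -i
        Filesystem      Inodes  IUsed   IFree IUse% Mounted on
        /dev/sda1      1310720 211540 1099180   17% /
        /dev/sda3      3276800 418822 2857978   13% /home
        /dev/sdb1      6553600 602113 5951487   10% /p4

    A file system showing plenty of free blocks under df but 100% in the IUse% column here has hit the “out of inodes” condition described above.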

    The df command allows you to select which file systems to display. See the man page for details on this capability.
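
    For instance, passing one or more mount points or device names limits the report to those file systems, and the -h (“human-readable”) option from this note’s title prints sizes in units such as G and M instead of raw blocks (sizes below are again invented):

        $ df -h /home
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda3        50G   25G   22G  54% /home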