Docker Guide: Deploying Ghost Blog with MySQL and Traefik with Docker

Ghost is a powerful open-source publishing and blogging platform written in JavaScript and running on Node.js. It is well designed and easy to use. The first version of Ghost was released in 2013 under the MIT license.

Traefik is a modern HTTP reverse proxy and load balancer for microservices. It makes deploying microservices easy and integrates with existing infrastructure components such as Docker, Swarm Mode, Kubernetes, Amazon ECS, Rancher, etcd, and Consul.

In this tutorial, we will show step by step how to install and configure Ghost as a Docker container. We will run Ghost under the latest Docker CE version, with MySQL as the database and Traefik as the reverse proxy.

Prerequisites

  • Ubuntu 18.04 LTS
  • Root privileges

What we will do

  1. Install Docker CE on Ubuntu 18.04 LTS
  2. Setup Docker for Non-root User
  3. Install Docker Compose
  4. Configure the Ghost Stack
     • Create a Custom Network
     • Create a Project Directory
     • Create and Configure the MySQL Container
     • Create and Configure the Traefik Reverse Proxy
     • Create and Configure the Ghost Container
  5. Deploy Ghost with MySQL and Traefik
  6. Testing

    Step 1 – Install Docker CE on Ubuntu 18.04 LTS

    The first step in this tutorial is to install the latest Docker CE version on the system. Docker CE can be installed from the official Docker repository.

    Add the Docker key and the docker-ce repository.

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

    On Ubuntu 18.04, the add-apt-repository command automatically refreshes the package lists, so the new repository is ready to use.

    Now install Docker using the apt command below.

    sudo apt install docker-ce -y

    After the installation is complete, start the Docker service and enable it to launch at every system boot.

    sudo systemctl start docker
    sudo systemctl enable docker

    The latest Docker CE version has been installed on the Ubuntu 18.04 system.

    Installing Docker CE

    Step 2 – Setup Docker for Non-root User

    In this guide, all container microservices will run under a normal, non-root user. So we need to configure a user that can run Docker containers and use the sudo command for root privileges.

    Create a new user named ‘hakase’ and set its password.

    useradd -m -s /bin/bash hakase
    passwd hakase

    Now add the ‘hakase’ user to the ‘sudo’ and ‘docker’ groups.

    usermod -aG sudo hakase
    usermod -aG docker hakase

    And restart the docker service.

    systemctl restart docker

    The ‘hakase’ user can now run Docker containers and use the sudo command for root privileges.
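    To verify the group assignment, you can check the user's group membership; this is just a quick sanity check:

    # The output should list 'sudo' and 'docker' among the groups
    id hakase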

    Setup Docker for Non-root User

    Login as user ‘hakase’ and run the docker hello-world container.

    su - hakase
    docker run -it hello-world

    And the following is the result.

    Test docker as non-root user

    Step 3 – Install Docker Compose

    In this tutorial, we will install Docker Compose 1.21 from a binary file in its GitHub repository.

    Download the docker-compose binary to the ‘/usr/local/bin’ directory.

    sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

    And make the ‘docker-compose’ file executable by changing the file permission.

    sudo chmod +x /usr/local/bin/docker-compose

    Docker Compose has been installed. Check it using the commands below.

    docker-compose version
    docker version

    Install Docker Compose

    Docker Compose 1.21 with Docker CE 18.x has been installed.

    Step 4 – Configure Ghost Stack

    In this step, we will configure the Ghost stack and create a new docker-compose file for the Ghost installation.

    We will create a new custom Docker network and a new docker-compose.yml file that contains three main services: the MySQL database, the Traefik reverse proxy, and the Ghost blog itself.

    Create a Custom Network

    Show the available Docker networks using the docker network command below.

    docker network ls

    Create a new custom Docker network for the Traefik reverse proxy named ‘traefiknet’.

    docker network create traefiknet

    Now check the available networks on the Docker system again.

    docker network ls

    Create a Custom Network

    The custom network for Traefik named ‘traefiknet’ has been created.
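    If you want to see the driver and subnet that Docker assigned to the new network, you can inspect it (an optional check):

    # Print the full configuration of the 'traefiknet' network as JSON
    docker network inspect traefiknet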

    Create a Project Directory

    After creating the custom Docker network, we will create a new project directory named ‘ghost’ and a new docker-compose.yml file in it.

    Log in as the ‘hakase’ user.

    su - hakase

    Create a new ‘ghost’ directory and change the working directory to it.

    mkdir ghost/
    cd ghost/

    And create a new docker-compose file.

    touch docker-compose.yml

    Create and Configure MySQL Service

    MySQL is the first service we will create. We want to create the MySQL container with the configuration below:

    • Use the MySQL 5.7 Docker image.
    • Mount the MySQL data directory to a local directory on the Docker host.
    • Run the MySQL service on the local internal network only.
    • Configure the MySQL credentials:
      • MySQL root password: mypassword
      • Database for Ghost named ‘ghostdb’, with user ‘ghost’ and password ‘ghostdbpass’
    • Name the MySQL container ‘mysql’.

    Inside the ‘ghost’ directory, create a new directory named ‘data’ and edit the ‘docker-compose.yml’ file.

    mkdir -p data/
    vim docker-compose.yml

    Paste the configuration below.

    version: '3.3'
    
    services:
    
      mysql:
        image: mysql:5.7
        restart: always
        volumes:
          - ./data:/var/lib/mysql
        labels:
          - "traefik.enable=false"
        networks:
          - internal
        environment:
          MYSQL_ROOT_PASSWORD: mypassword
          MYSQL_USER: ghost
          MYSQL_PASSWORD: ghostdbpass
          MYSQL_DATABASE: ghostdb
        container_name: mysql

    Save and exit.
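    Once the stack is running (Step 5), you should be able to open a MySQL shell inside the container with the credentials defined above; this is an optional sanity check using the container name and credentials from this guide:

    # Connect to the 'ghostdb' database as the 'ghost' user inside the container
    docker exec -it mysql mysql -u ghost -pghostdbpass ghostdb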

    Create and Configure Traefik Reverse Proxy

    After creating the MySQL service, we will create and configure the Traefik reverse proxy container.

    Before editing the ‘docker-compose.yml’ script, we need to create a new Traefik configuration file named ‘traefik.toml’.

    vim traefik.toml

    Paste the Traefik configuration below.

    #Traefik Global Configuration
    debug = false
    checkNewVersion = true
    logLevel = "ERROR"
    
    #Define the EntryPoint for HTTP and HTTPS
    defaultEntryPoints = ["https","http"]
    
    #Define the HTTP port 80 and
    #HTTPS port 443 EntryPoint
    #Enable automatically redirect HTTP to HTTPS
    [entryPoints]
    [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
    [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
    
    #Enable Traefik Dashboard on port 8080
    #with basic authentication method
    #hakase and password
    [entryPoints.dash]
    address=":8080"
    [entryPoints.dash.auth]
    [entryPoints.dash.auth.basic]
        users = [
            "hakase:$apr1$hEgpZUN2$OYG3KwpzI3T1FqIg9LIbi.",
        ]
    
    [api]
    entryPoint = "dash"
    dashboard = true
    
    #Enable retrying a request if a network error occurs
    [retry]
    
    #Define Docker Backend Configuration
    [docker]
    endpoint = "unix:///var/run/docker.sock"
    domain = "hakase-labs.io"
    watch = true
    exposedbydefault = false
    
    #Letsencrypt Registration
    #Define the Letsencrypt ACME HTTP challenge
    [acme]
    email = "[email protected]"
    storage = "acme.json"
    entryPoint = "https"
    OnHostRule = true
      [acme.httpChallenge]
      entryPoint = "http"

    Save and exit.
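    The users entry above holds an htpasswd-style MD5 (apr1) hash. If you want your own password instead of the example hash, you can generate a new entry with the htpasswd tool from the apache2-utils package and paste its output into ‘traefik.toml’ (the username and password below are just examples):

    sudo apt install apache2-utils -y
    # Print a 'user:hash' line suitable for the users = [...] list
    htpasswd -nb hakase mystrongpassword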

    Now we need to create a new file for the Let's Encrypt certificate data, named ‘acme.json’. Traefik uses it to store the account data and certificates generated by Let's Encrypt.

    Create the blank ‘acme.json’ file and change the permission to 600.

    touch acme.json
    chmod 600 acme.json

    Next, we will edit the ‘docker-compose.yml’ script and add the traefik service configuration.

    • We will be using the latest Traefik Docker image.
    • The container will be named ‘traefik’.
    • It uses the custom network ‘traefiknet’ and exposes the HTTP and HTTPS ports.
    • Mount the Docker socket file and the Traefik configuration files ‘traefik.toml’ and ‘acme.json’.
    • We define the Traefik dashboard URL and backend through Docker labels.

    Edit the ‘docker-compose.yml’.

    vim docker-compose.yml

    Paste the Traefik service configuration below.

      traefik:
        image: traefik:latest
        command: --docker
        ports:
          - 80:80
          - 443:443
        labels:
          - "traefik.enable=true"
          - "traefik.backend=dashboard"
          - "traefik.frontend.rule=Host:traef.hakase-labs.io"
          - "traefik.port=8080"
        networks:
          - traefiknet
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - ./traefik.toml:/traefik.toml
          - ./acme.json:/acme.json
        container_name: traefik
        restart: always

    Save and exit.

    Create and Configure Ghost Service

    After configuring the traefik reverse proxy, we will create the main ghost service configuration.

    These are the configuration details we want:

    • Use Ghost v1 with the small Alpine-based Docker image.
    • Mount the Ghost content directory to a local directory named ‘blog’.
    • Run the Ghost service on its default port, with the domain name ‘gho.hakase-labs.io’ configured through Docker labels.
    • Attach the Ghost service to two Docker networks, internal and traefiknet.
    • Configure the MySQL database details to match the mysql container configuration.
    • Start Ghost only after the Traefik and MySQL containers are up and running.

    Create a new directory named ‘blog’ and edit the ‘docker-compose.yml’ file.

    mkdir -p blog/
    vim docker-compose.yml

    Paste the configuration below.

      ghost:
        image: ghost:1-alpine
        restart: always
        ports:
          - 2368
        volumes:
          - ./blog:/var/lib/ghost/content
        labels:
          - "traefik.enabled=true"
          - "traefik.backend=ghost"
          - "traefik.frontend.rule=Host:gho.hakase-labs.io"
          - "traefik.docker.network=traefiknet"
          - "traefik.port=2368"
        networks:
          - internal
          - traefiknet
        environment:
          database__client: mysql
          database__connection__host: mysql
          database__connection__user: ghost
          database__connection__password: ghostdbpass
          database__connection__database: ghostdb
        container_name: ghost
        depends_on:
          - mysql
          - traefik

    networks:
      traefiknet:
        external: true
      internal:
        external: false

    Save and exit.
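    Before deploying, it is a good idea to validate the assembled file. The command below parses ‘docker-compose.yml’ and prints the resolved configuration, or an error if the YAML is malformed:

    docker-compose config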

    Create and Configure Ghost Service

    We now have all the directories and configuration files in place, as shown below.

    tree

    config files
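    The project layout should look roughly like this (the ‘data’ and ‘blog’ directories will be populated by the containers on first start):

    .
    ├── acme.json
    ├── blog
    ├── data
    ├── docker-compose.yml
    └── traefik.toml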

    Step 5 – Deploy Ghost with MySQL and Traefik

    To build and run our whole Ghost stack, we can use the command below.

    docker-compose up -d

    Deploy Ghost with MySQL and Traefik

    When it’s complete, check all running services.

    docker-compose ps

    And the following is the result.

    docker-compose ps

    If you get an error, check the container logs using the commands below.

    docker-compose logs mysql
    docker-compose logs traefik
    docker-compose logs ghost

    check the container log

    The ghost stack with MySQL and the Traefik reverse proxy is up and running.
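    Before moving on to browser testing, you can also confirm from the shell that Traefik answers on both entrypoints; the hostname below follows this guide's example domain:

    # HTTP should answer with a redirect to HTTPS
    curl -I http://gho.hakase-labs.io
    # -k skips certificate verification in case the Let's Encrypt certificate has not been issued yet
    curl -kI https://gho.hakase-labs.io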

    Step 6 – Testing

    Open the Traefik dashboard with its URL, mine is http://traef.hakase-labs.io/

    Log in with the username and password configured in the ‘traefik.toml’ file.

    Login to Traefik

    And the following is the Traefik dashboard.

    Traefik dashboard

    For the Ghost installation, open the Ghost URL in the address bar; mine is http://gho.hakase-labs.io/

    And you will get the ghost home page.

    Ghost Blog running on Docker

    Now visit the admin page to set up and configure a new admin user. My URL is: http://gho.hakase-labs.io/admin/

    Click the green button to create a new admin user.

    Ghost installer

    Enter the user details, password, and email, then click the green button again.

    Create admin login

    To invite new members later, click the ‘I will do this later..’ link.

    Invite users

    Now you will get the Ghost Dashboard.

    Ghost Dashboard

    And after creating the sample post, the following is the result.

    Ghost blog running in a Docker container

    The Ghost blog installation with a MySQL database and the Traefik reverse proxy on Docker has been completed successfully.


    About Muhammad Arul

    Muhammad Arul is a freelance system administrator and technical writer. He has been working with Linux environments for more than 5 years, is an open source enthusiast, and is highly motivated in Linux installation and troubleshooting. He mostly works with RedHat/CentOS Linux and Ubuntu/Debian, the Nginx and Apache web servers, Proxmox, Zimbra administration, and website optimization, and is currently learning about OpenStack and container technology.

    How to install MediaWiki on Ubuntu 18.04 LTS

    MediaWiki is free and open-source wiki software that allows you to create your own wiki site. It is written in PHP and uses a MySQL/MariaDB database backend. MediaWiki comes with lots of features, including multi-language support, user management, content management and sharing, editing, formatting, referencing, and much more.

    In this tutorial, we will learn how to install MediaWiki with the Apache web server on an Ubuntu 18.04 server.

    Requirements

    • A server running Ubuntu 18.04.
    • A non-root user with sudo privileges.

    Install LAMP Server

    First, install Apache and MariaDB server using the following command:

    sudo apt-get install apache2 mariadb-server -y

    Once both packages are installed, you will need to add the Ondrej PHP repository to your system. You can add it with the following commands:

    sudo apt-get install software-properties-common
    sudo add-apt-repository ppa:ondrej/php -y

    Once the repository is added, update the package lists and install PHP along with all the required PHP libraries (note that the mcrypt extension was removed in PHP 7.2, so there is no php7.2-mcrypt package):

    sudo apt-get update -y
    sudo apt-get install php7.2 libapache2-mod-php7.2 php7.2-common php7.2-mbstring php7.2-xmlrpc php7.2-soap php7.2-gd php7.2-xml php7.2-intl php7.2-mysql php7.2-cli php7.2-zip php7.2-curl -y

    Once all the packages are installed, open php.ini file with the following command:

    sudo nano /etc/php/7.2/apache2/php.ini

    Make the following changes:

    memory_limit = 256M
    upload_max_filesize = 100M
    max_execution_time = 360
    date.timezone = Asia/Kolkata
    

    Save and close the file, then start Apache and MariaDB service and enable them to start on boot time:

    sudo systemctl start apache2
    sudo systemctl enable apache2
    sudo systemctl start mysql
    sudo systemctl enable mysql
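    As a quick sanity check, you can confirm the installed PHP version and that both services are active:

    php -v
    sudo systemctl status apache2 --no-pager
    sudo systemctl status mysql --no-pager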

    Configure MariaDB

    First, secure MariaDB installation with the following command:

    sudo mysql_secure_installation

    Answer all the questions as shown below:

        Enter current password for root (enter for none):
        Set root password? [Y/n]: N
        Remove anonymous users? [Y/n]: Y
        Disallow root login remotely? [Y/n]: Y
        Remove test database and access to it? [Y/n]:  Y
        Reload privilege tables now? [Y/n]:  Y
    

    Once MariaDB is secured, log in to the MariaDB shell:

    mysql -u root -p

    Enter your root password when prompted, then create a database and user for MediaWiki:

    MariaDB [(none)]> CREATE DATABASE mediadb;
    MariaDB [(none)]> CREATE USER 'media'@'localhost' IDENTIFIED BY 'password';

    Next, grant all privileges on the mediadb database with the following command:

    MariaDB [(none)]> GRANT ALL ON mediadb.* TO 'media'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;

    Next, flush the privileges and exit from the MariaDB shell:

    MariaDB [(none)]> FLUSH PRIVILEGES;
    MariaDB [(none)]> EXIT;
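    To verify that the new account works, you can log back in as the 'media' user from the shell and list its privileges (a quick check; the credentials follow this guide's example):

    # Should print the GRANT statements for 'media'@'localhost'
    mysql -u media -p -e "SHOW GRANTS FOR CURRENT_USER;"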

    Download and Install MediaWiki

    Next, download the latest version of MediaWiki from the official website:

    wget https://releases.wikimedia.org/mediawiki/1.31/mediawiki-1.31.0.tar.gz

    Once the download is completed, extract the downloaded file with the following command:

    tar -xvzf mediawiki-1.31.0.tar.gz

    Next, copy the extracted directory to the Apache root directory and give proper permissions:

    sudo cp -r mediawiki-1.31.0 /var/www/html/mediawiki
    sudo chown -R www-data:www-data /var/www/html/mediawiki
    sudo chmod -R 755 /var/www/html/mediawiki

    Next, create an Apache virtual host file for Mediawiki with the following command:

    sudo nano /etc/apache2/sites-available/mediawiki.conf

    Add the following lines:

    
    <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot /var/www/html/mediawiki/
        ServerName example.com

        <Directory /var/www/html/mediawiki/>
            Options +FollowSymLinks
            AllowOverride All
        </Directory>

        ErrorLog /var/log/apache2/media-error_log
        CustomLog /var/log/apache2/media-access_log common
    </VirtualHost>
    

    Save the file, then enable the virtual host and the Apache rewrite module with the following commands:

    sudo a2ensite mediawiki.conf
    sudo a2enmod rewrite
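    Before restarting Apache, it is worth checking the configuration syntax; this should report "Syntax OK":

    sudo apache2ctl configtest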

    Finally, restart the Apache web server to apply the changes:

    sudo systemctl restart apache2

    Now, open your web browser and type the URL http://example.com. You will be redirected to the following page:

    MediaWiki Installer

    Now, click on the set up the wiki button. You should see the following page:

    Choose language

    Here, choose your wiki language and click on the Continue button. You should see the following page:

    Installation environment check

    Now, click on the Continue button. You should see the following page:

    Database details

    Now, provide your database details and click on the Continue button. You should see the following page:

    Select database type

    Now, select the storage engine and click on the Continue button. You should see the following page:

    Set a site name, username and password

    Now, provide your wiki site name, username and password. Then, click on the Continue button. You should see the following page:

    MediaWiki Settings

    MediaWiki Settings - page 2

    Now, mark all your required settings and click on the Continue button. You should see the following page:

    Start MediaWiki installation

    Now, click on the Continue button to start the installation. Once the installation is completed, you should see the following page:

    Installation finished

    Now, click on the Continue button. You should see the following page:

    Download LocalSettings.php

    Here, you need to download the LocalSettings.php file and place it in the MediaWiki root directory.
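    Assuming the file landed in your home directory, moving it into place and matching the ownership set earlier would look like this (the download path is illustrative):

    sudo mv ~/LocalSettings.php /var/www/html/mediawiki/
    sudo chown www-data:www-data /var/www/html/mediawiki/LocalSettings.php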

    Now, open your web browser and type the URL http://example.com. You should see your MediaWiki site as in the following image:

    MediaWiki start page

    Hitesh Jethva

    About Hitesh Jethva

    Over 8 years of experience as a Linux system administrator. My skills include in-depth knowledge of RedHat/CentOS, Ubuntu, Nginx and Apache, MySQL, Subversion, web hosting, web servers, Squid proxy, NFS, FTP, DNS, Samba, LDAP, OpenVPN, HAProxy, Amazon Web Services, WHMCS, OpenStack Cloud, Postfix mail server, security, etc.

    CoreSite Successfully Completes Annual Compliance Examinations

    DENVER – CoreSite Realty Corporation (NYSE:COR), a premier provider of secure, reliable, high-performance data center and interconnection solutions across the U.S., recently announced it has successfully completed its annual compliance examinations for the colocation services offered across the 17 operating multi-tenant data centers in its portfolio. The completion of these annual examinations uniquely positions CoreSite to offer its customers a consistent and comprehensive approach to compliance requirements as part of its overall customer value proposition. The completion of these various compliance objectives also demonstrates CoreSite's ongoing commitment to operational excellence and customer experience, enabling its more than 1,350 customers to meet industry-standard compliance requirements. In addition to enterprise-class colocation infrastructure, CoreSite provides controls over physical access and the environmental systems that house its customers' critical data systems and hardware. CoreSite successfully completed the following annual examinations:


    • System and Organization Controls (SOC) 2 Type 2 examination
    • SOC 1 Type 2 examination
    • International Organization for Standardization (ISO) Information Security Management System (ISMS) certification (ISO 27001)
    • Payment Card Industry (PCI) Data Security Standard (DSS) validation
    • Health Insurance Portability and Accountability Act (HIPAA) attestation

    The SOC 1 and SOC 2 examinations are attestation standards issued by the American Institute of Certified Public Accountants (AICPA), and both reports were issued under Statement on Standards for Attestation Engagements (SSAE) No. 18, which is the new AICPA standard for SOC reports. SOC 2 is measured using a standardized set of criteria, requirements, and controls, whereas SOC 1 is measured against company-defined control objectives and underlying controls. The examinations provide CoreSite customers with assurance of corporate controls, including controls relating to physical and environmental security, customer support, and operational excellence. Companies with compliance requirements may require SOC 1 or SOC 2 examination reports, including publicly traded enterprises, financial firms, and healthcare organizations.

    ISO 27001 is an internationally recognized standard that outlines the requirements for creating a risk-based framework to initiate, implement, maintain, and manage information security within an organization. The ISO 27001 certification, one of the most stringent certifications for information security controls, confirms that specified information security controls and other forms of risk treatment are in place to detect and defend against potential information security threats and vulnerabilities. The certification also ensures that the information security controls continue to meet physical security needs on an ongoing basis. The scope of the ISO 27001 certification applies to the information security management system (ISMS) supporting CoreSite's provision and operation of 24x7 colocation services for its customers, and covers both its corporate policies and procedures as well as those of its operating data centers.

    The PCI-DSS is a broad set of standards that require merchants and service providers that maintain or host systems that store, process, or transmit customer payment card data to adhere to strict information security controls and processes. As a provider of data center colocation services, CoreSite has proactively met the relevant requirements for its business in support of the PCI compliance needs of its customers. The 2017 PCI-DSS report has been issued under version 3.2.

    HIPAA requires that covered entities and business associates take strong measures to protect the privacy and security of protected health information. By achieving an attestation against the HIPAA Security Standards for the Protection of Electronic Protected Health Information ("HIPAA Security Rule") and the Notification in the Case of Breach of Unsecured Protected Health Information enacted as part of the American Recovery and Reinvestment Act of 2009 ("HITECH Breach Notification Requirements"), CoreSite provides assurance to healthcare industry stakeholders that its data center colocation services meet the HIPAA Security Rule and HITECH Breach Notification requirements necessary to protect a covered entity's physically hosted information systems in CoreSite's nationwide platform of multi-tenant data centers.

    All of the above examinations and assessments were performed by Schellman & Company, LLC, an independent CPA and Qualified Security Assessor (QSA) firm.

    About CoreSite
    CoreSite Realty Corporation (NYSE:COR) delivers secure, reliable, high-performance data center and interconnection solutions to a growing customer ecosystem across eight key North American markets. More than 1,350 of the world's leading enterprises, network operators, cloud providers, and supporting service providers choose CoreSite to connect, protect, and optimize their performance-sensitive data, applications, and computing workloads. Our scalable, flexible solutions and 450+ dedicated employees consistently deliver unmatched data center options, all of which leads to a best-in-class customer experience and lasting relationships. For more information, visit www.CoreSite.com.

    VMware Cloud on AWS Expands to Asia-Pacific, Delivers New Enterprise Capabilities

    LAS VEGAS, NV (VMworld 2018) – VMware, Inc. (NYSE: VMW) announced that VMware Cloud on AWS is now available in Amazon Web Services' (AWS) Asia-Pacific (Sydney) region. VMware also announced new capabilities that help organizations worldwide rapidly migrate applications and data centers to an intrinsically secure cloud service that meets enterprise application needs.

    VMware Cloud on AWS is an on-demand service that reduces the cost and effort associated with migrating applications to the cloud by delivering infrastructure and operations consistent with those deployed within customer data centers, and extending the tools, processes, and practices proven to support the most demanding applications. VMware Cloud on AWS is delivered, sold, and supported by VMware and its partner community and brings VMware's enterprise-class Software-Defined Data Center software to the AWS Cloud, enabling customers to run production applications across VMware vSphere-based hybrid cloud environments, with optimized access to the breadth and depth of AWS services and robust disaster protection.

    “VMware Cloud on AWS is now available across multiple regions globally including the U.S., Europe, and now Asia-Pacific, and we’ve increased the rate at which we deliver innovative capabilities and powerful solutions for our customers,” said Mark Lohmeyer, senior vice president and general manager, Cloud Platform Business Unit, VMware. “VMware Cloud on AWS provides a fast and cost-effective way to migrate mission-critical applications, or even entire data centers, to the cloud. And once in the cloud, we provide the industry-leading Software-Defined Data Center capabilities of VMware, coupled with the elasticity, breadth, and depth of AWS infrastructure and services, making it the ideal platform for modern applications.”

    “Customers have been asking us to bring VMware Cloud on AWS to Asia Pacific, and we are pleased to be doing that today,” said Sandy Carter, vice president, EC2 Windows Enterprise Workloads, Amazon Web Services. “VMware Cloud on AWS is the only hybrid cloud service that allows vSphere customers to leverage consistent infrastructure across on-premises data centers and AWS, allowing them to migrate current and new workloads to the cloud with the most functionality, greatest agility, and best security and performance. VMware Cloud on AWS enables customers to save costs, while also enabling them to scale depending on their application needs.”

    The latest VMware Cloud on AWS updates include:

    50 percent lower entry-level price and new minimum configuration: VMware will reduce the entry price for VMware Cloud on AWS by 50 percent and offer a smaller 3-host minimum SDDC configuration as a starting point for production workloads. For a limited time, VMware will offer the 3-host SDDC environment for the price of a 2-host configuration.

    License optimization for enterprise applications (Oracle/Microsoft): With new custom CPU core count capabilities, customers will be able to specify just the number of CPU cores they need, reducing the cost of running mission-critical applications that are licensed per CPU core. With VM-Host Affinity, customers will be able to pin workloads to a specific host group to support licensing requirements.

    Rapid data center evacuation with live migration of thousands of VMs: Customers will be able to live migrate thousands of VMs with zero downtime and schedule exactly when to cut over to the new cloud environment with VMware NSX Hybrid Connect (previously known as VMware Hybrid Cloud Extension), powered by vMotion and vSphere Replication. VMware is offering a free migration cost assessment with VMware Cost Insight as part of the core service to help with cloud migration planning.

    New high-capacity storage option, backed by Amazon Elastic Block Store (Amazon EBS): Customers will be able to independently scale compute and storage resource requirements and reduce costs for storage-capacity-demanding workloads with new clusters for storage-dense environments. These clusters deliver scalable storage capabilities with VMware vSAN using Amazon Elastic Block Store and run on new Amazon Elastic Compute Cloud (Amazon EC2) R5.metal instances. Amazon EC2 R5.metal instances are based on 2.5 GHz Intel Platinum 8000 series (Skylake-SP) processors. Each host has two sockets, 48 cores, 96 hyper-threads, 768 GB RAM, and 25 Gbps network bandwidth.

    Application-centric security with VMware NSX: Customers will gain granular control over east-west traffic between workloads running in VMware Cloud on AWS via micro-segmentation provided by NSX. Security policies can be defined based on workload attributes (e.g., VM names, OS versions) and user-defined tags, are dynamically enforced at the VM level, and follow workloads wherever they are moved.

    NSX/AWS Direct Connect integration for simplified, high-performance connectivity: This new integration will make it easier for customers to connect across hybrid cloud environments and improve network performance. Integration between NSX and AWS Direct Connect will allow private and consistent connectivity between VMware workloads running on VMware Cloud on AWS and those running on-premises. This integration will also accelerate migration to the cloud and enable multi-tier hybrid applications.

    Optimized cost/performance with autoscaling: Elastic DRS allows users to automate VMware Cloud on AWS cluster scaling. Elastic DRS enables automated scaling up or down of hosts and rebalancing of clusters, based on the needs of the applications and the policies the customer defines.

    Real-time log management included at no additional cost: VMware has added VMware Log Intelligence to the core VMware Cloud on AWS service, providing customers with access to VMware Cloud on AWS audit logs for increased security and compliance at no additional cost.

    Rapidly Growing Ecosystem Validates Demand for VMware-Based Hybrid Cloud
    VMware partners are helping customers deploy hybrid cloud environments using VMware Cloud on AWS. In less than six months since launching the expansion of the VMware Partner Network to enable solution providers, managed service providers (MSPs), and system integrators to grow their hybrid cloud business with VMware Cloud on AWS, nearly 150 partners globally have achieved their VMware Cloud on AWS Solution Competency. Additionally, the number of solutions from VMware Technology Alliance Partners that have been tested and validated for VMware Cloud on AWS has increased 4x to nearly 100 solutions. Read this blog for more details on the expanding partner ecosystem for VMware Cloud on AWS.

    Customers Globally Are Adopting VMware Cloud on AWS as Their Hybrid Cloud of Choice
    a2z, Inc. provides powerful cloud-based exposition, conference and meeting management and marketing solutions that help boost revenue for organizers and enhance value for event participants. “We started looking at VMware Cloud on AWS when we wanted to move to a full cloud environment but were concerned about the time and effort required to do a full cloud migration,” said Ramon Castro, vice president of IT, a2z. “VMware Cloud on AWS was a perfect match for us. We could evacuate our existing data center environment and seamlessly move into a new VMware cloud environment without rearchitecting our entire solution. Even better, we would have cloud adjacency whereby we could immediately start taking advantage of the capabilities AWS offered while maintaining our current VMware environment.”

    GenPro is a U.S. leader in expedited transportation solutions for perishable and non-perishable commodities. “Disaster recovery was not economically possible for us before VMware Cloud on AWS,” said Ari Weinstock, director of IT, GenPro. “Now we have a cost-effective disaster recovery solution that provides dynamic resource scaling to help us meet our recovery time objectives. VMware Cloud on AWS also provides us with a consistent infrastructure across our hybrid cloud environment.”

    ME provides personal and business banking services in Australia. “One of ME’s key focuses is on consistently delivering frictionless, personalized banking services that meet our customer promises,” said Sunny Avdihodzic, general manager, Strategy and Architecture, ME. “As we explore the VMware Cloud on AWS technology and its capabilities, we’re excited by the potential opportunity to enable development and deployment of new services and resources faster than ever before, while mitigating the risk to our business. The unique platform abilities are expected to give us a greater level of flexibility, enabling our people to innovate more while meeting the security and governance standards expected in our industry.”

    Stagecoach is a leading public transport company in the United Kingdom that runs over 11,000 trains and buses, transporting over three million passengers a day. “VMware Cloud on AWS supports our new cloud-first strategy by providing Group Technology and Change with a VMware environment in the cloud to support our business-critical applications,” said Lesley Ashman, Chief Information Officer, Stagecoach Group. “With VMware Cloud on AWS we have gained new levels of agility, scale, and resiliency through a multi-Availability Zone deployed platform. VMware Cloud on AWS mitigated our risk of moving business-critical apps to the cloud because we could leverage a consistent infrastructure and operational model.”

    About VMware
    VMware software powers the world's complex digital infrastructure. The company's compute, cloud, mobility, networking, and security offerings provide a dynamic and efficient digital foundation to over 500,000 customers globally, aided by an ecosystem of 75,000 partners. Headquartered in Palo Alto, California, this year VMware celebrates two decades of breakthrough innovation benefiting business and society.

    Cloudian Raises $94 Million in Funding

    SAN MATEO, CA – Cloudian announced that it raised $94 million in a Series E funding round, bringing the company's total funding to $173 million. The round includes participation from investors Digital Alpha, Eight Roads Ventures, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS (Wilson Sonsini) Investments. Cloudian will use this investment, which is the largest single round to date for a distributed file systems and object storage provider, to expand its global sales and marketing efforts and grow its engineering team to meet rising demand for its limitlessly scalable enterprise storage solutions.

    “Cloudian redefines enterprise storage with a global data fabric that integrates both private and public clouds — spanning across sites and around the globe — at an unprecedented scale that creates new opportunities for businesses to derive value from data,” said Cloudian CEO Michael Tso. “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”

    According to IDC, the worldwide enterprise storage market grew by 34.4% year-over-year in the first quarter of 2018, reaching over $52 billion in annualized revenue. Cloudian's global enterprise storage fabric meets this growing demand with a software-defined storage platform that transforms standard servers and virtual machines into a pool of logical storage resources that can be co-located with data sources and data consumers, whether in physical data centers, at remote sites, or in the cloud. Scalable to hundreds of petabytes and beyond, the Cloudian architecture creates a global federation of storage assets to form a hyperscale fabric that removes the boundaries of conventional storage, allowing data resources to be transparently pooled and shared over distance.

    “Computing now operates without physical boundaries, and customers need storage solutions that also span from the data center to the edge,” said Takayuki Inagawa, president & CEO of NTT DOCOMO Ventures. “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that supports the next generation of connected devices.”

    Global companies in data-intensive verticals such as media, healthcare, and manufacturing create and consume huge quantities of data at hundreds of locations across the organization. Cloudian meets these distributed storage needs with a peer-to-peer resource fabric and a single management framework that spans Cloudian storage appliances, industry-standard x86 servers running Cloudian software, and public cloud storage. The result is simple, efficient data management across the global storage landscape.

    “For too long, enterprise storage users have settled for solutions that offer incrementally more performance or scale without fundamentally addressing the challenge of global data management,” said Daniel Auerbach, senior managing partner at Eight Roads Ventures. “When Eight Roads Ventures first invested in Cloudian in 2014 we saw a different approach – here was a company applying cloud-scale technologies to the enterprise storage challenge. This, our third round of investment, affirms our belief in Cloudian’s innovative approach and next stage of growth.”

    Recently added Cloudian customers include public health agencies in the US and UK, two of the top 5 Formula One teams, a US national research lab, an online travel market leader, a top 3 pharmaceutical company, a top 3 global automobile maker, a top 5 European bank, an Ivy League university, and one of the world's largest global engineering firms.

    “Global 2000 customers in media, automotive, manufacturing, healthcare, and government look to Cloudian to manage their rapidly growing information assets, a trend that we see only accelerating,” said Edouard Hervey, managing director at Goldman Sachs. “We believe Cloudian is well-positioned to dominate the next generation of enterprise storage with its elegantly simple design that integrates both the data center and cloud environments.”

    “With long-standing roots in the Silicon Valley, our firm has represented over 3,000 private companies and early-stage startups, giving us a unique perspective on the success factors found in high-growth firms,” said Larry Sonsini, Cloudian investor and senior and founding partner of Wilson Sonsini Goodrich & Rosati. “WS Investments chose to work with the Cloudian management team led by Michael Tso because they exhibit the markers of long-term success with a strong, integrity-driven culture and an innovative solution to the vital challenge of global data management.”

    Unlike traditional storage solutions, whose architectures were derived from stand-alone systems that operate within a single data center, Cloudian's architecture was built on cloud technologies designed for distributed environments and limitless scale.

    “There will be 20 billion connected devices by 2020, creating a compelling need for data management solutions that are architected for geo-distribution and cloud integration,” said Gregory M. Bryant, Intel’s senior vice president and general manager of the Client Computing Group and Cloudian board member. “Cloudian’s global data fabric architecture lets customers manage data organization-wide from a single console, so they can capitalize on the next generation of connected computing.”

    The Series E round includes a $25 million investment from Digital Alpha that was first announced in February.

    About Cloudian
    Cloudian turns data into insight with a hyperscale data fabric that lets customers store, find, and protect data across the organization and around the globe. Cloudian data management solutions bring cloud technology and economics to the data center with uncompromising data durability, intuitive management tools, and the industry's most compatible S3 API. Cloudian and its ecosystem partners help Global 1000 customers simplify unstructured data management today, while preparing for the data demands of AI and machine learning tomorrow.