Cloud Connector Worker Node Deployment

Part 1: Enable SSH for Primary Cloud Connectors on Site 1 and Site 2

Step 1: Enable SSH for Primary Cloud Connectors on Site 1
  1. From the ControlCenter Desktop
    • Launch Google Chrome Site1 profile
    • In the address Bar
      • Enter https://vcenterXX-01a.euc-livefire.com
        • Where XX is your assigned POD ID
    • On the vCenter Getting started page
    • Click Launch VSPHERE CLIENT (HTML 5)
  1. In the vSphere Client
    • Expand vcenterXX-01a.euc-livefire.com > Region01A > Bangalore
    • Locate the primary Cloud Connector: HznCCxx-01a
      • Where xx is the attendee Identifier. In our example it is 22
    • From the Summary Tab
      • LAUNCH WEB CONSOLE
      • The Console Session Opens in a new tab.
  1. In Cloud Connector Console
    • Login user: root
    • Password: VMware1!VMware1!
  1. In Cloud Connector Console, activate ccadmin
    • Type passwd ccadmin and press enter
    • New Password: VMware1!VMware1! and press enter
    • Retype new password: VMware1!VMware1! and press enter
  1. In Cloud Connector Console
    • Execute the following command
      • /opt/vmware/bin/configure-adapter.py --sshEnable
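The SSH-enable command above can be wrapped in a small guard so it fails loudly if the adapter tool is not where this lab expects it. This is a sketch: enable_ssh is a hypothetical helper, and the default path is the one used in this lab.

```shell
#!/bin/sh
# Sketch: enable SSH on a Cloud Connector appliance (run as root there).
# enable_ssh is a hypothetical wrapper; the default path below is the
# configure-adapter.py location used in this lab.
enable_ssh() {
  adapter="${1:-/opt/vmware/bin/configure-adapter.py}"
  if [ ! -x "$adapter" ]; then
    echo "not found or not executable: $adapter" >&2
    return 1
  fi
  "$adapter" --sshEnable
}
```

On the appliance you would simply call enable_ssh after activating ccadmin with passwd.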

Step 2: Enable SSH for Primary Cloud Connectors on Site 2
  1. From the ControlCenter Desktop
    • Launch Google Chrome Site2 profile
    • In the address Bar
      • Enter https://vcenterXX-02a.euc-livefire.com
        • Where XX is your assigned POD ID
    • On the vCenter Getting started page
    • Click Launch VSPHERE CLIENT (HTML 5)
  1. In the vSphere Client
    • Expand vcenterXX-02a.euc-livefire.com > Region02A > Seattle
    • Locate the primary Cloud Connector: HznCCxx-02a
      • Where xx is the attendee Identifier. In our example it is 22
    • From the Summary Tab
      • LAUNCH WEB CONSOLE
      • The Console Session Opens in a new tab.
  1. In Cloud Connector Console
    • Login user: root
    • Password: VMware1!VMware1!
  1. In Cloud Connector Console, activate ccadmin
    • Type passwd ccadmin and press enter
    • New Password: VMware1!VMware1! and press enter
    • Retype new password: VMware1!VMware1! and press enter
  1. In Cloud Connector Console
    • Execute the following command
      • /opt/vmware/bin/configure-adapter.py --sshEnable

Part 2: Deploy and Configure the Worker Node on Site 1

Step 1: Deploy Worker Node on Site 1
  1. From the ControlCenter Desktop
    • Launch Google Chrome Site1 profile
    • In the address Bar
      • Enter https://vcenterXX-01a.euc-livefire.com
        • Where XX is your assigned POD ID
    • On the vCenter Getting started page
    • Click Launch VSPHERE CLIENT (HTML 5)
  1. In the vSphere Client
    • Expand vcenterXX-01a.euc-livefire.com > Region01A > Bangalore
      • Right Click on esxi-01a.euc-livefire.com
        • Click Deploy OVF Template
  1. In the Deploy OVF Template Window
    • Under Select an OVF Template
      • Select Local File radio button
        • Click on UPLOAD FILES
          • From the upload window
            • Navigate to Desktop > Software > Horizon
              • Select horizon-cloud-connector-2.2.0.0-19666316_OVF10.ova
                • Click Open
          • Click Next
  1. In the Select a name and folder Window
    • Virtual machine name: HznCCxx-01b
      • Where xx is the attendee Identifier. In our example it is 22
    • In Select a location for the virtual machine Section
      • Select Region01A
      • Click Next
  1. In Select a compute resource Window
    • Expand by clicking Region01A > Bangalore
      • Select esxi-01a.euc-livefire.com
      • Click Next
  1. In the Review details Window
    • Click Next
  1. In the License agreements Window
    • Click on I accept all license agreements check box
    • Click Next
  1. In Select storage Window
    • Select CorpLun01a
    • From Select virtual disk format dropdown
      • Select Thin Provision
      • Click Next
  1. In the Select networks Window
    • From Destination Network dropdown
      • Select VM Network
      • Click Next
  1. In the Customize template Window
    • Under Application Section which is denoted as 1
      • Root Password : VMware1!VMware1!
      • Confirm Password : VMware1!VMware1!
      • Worker Node: Select the checkbox to mark this appliance as a worker node.
      • Public key for ccadmin user: Open id_rsa.pub in Notepad or Notepad++ and copy its contents into this field
        • Note:
          • This is the public key we generated using the command prompt in Part 1
          • The public key should start with ssh-rsa
          • The id_rsa.pub file is located at
            • C:\Users\Administrator\.ssh
    • In the Network Section which is denoted as 2
      • Leave the default values
      • Click on Network to collapse the Network section
    • In the Proxy Section which is denoted as 3
      • Leave the default values
      • Click on Proxy to collapse the Proxy section
      • Scroll down to the Network Properties section
    • In the Network Properties Section which is denoted as 4
      • Default Gateway : 192.168.110.1
      • Domain Name: euc-livefire.com
      • Domain Search Path: euc-livefire.com
      • Domain Name Servers: 192.168.110.10
      • Network IP Address: 192.168.110.14
      • Network Netmask: 255.255.255.0
      • Click Next
  1. In the Ready to Complete Window
    • Verify all the properties
    • Click Finish
  1. Notice the import OVF package task is running
    • Under vCenter > Recent Tasks
    • This may take 15 minutes.
  1. In the vCenter Admin Console
    • Select and right-click your hznCCXX-01b connector
      • Where XX is your POD ID
      • Select Power > Power On
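The public key pasted into the Customize template step above must be a one-line OpenSSH key beginning with ssh-rsa. If a shell is handy, a quick format check catches accidental pastes of a private key or a PuTTY-format file. This is a sketch; check_pubkey is a hypothetical helper and the sample key is made up.

```shell
#!/bin/sh
# Returns 0 if the file looks like a one-line OpenSSH RSA public key
# (starts with "ssh-rsa " followed by base64 key material).
check_pubkey() {
  head -n 1 "$1" | grep -q '^ssh-rsa [A-Za-z0-9+/=]\{16,\}'
}

# Example with a made-up (truncated) key:
tmpkey="$(mktemp)"
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCexample administrator@controlcenter" > "$tmpkey"
check_pubkey "$tmpkey" && echo "looks like an OpenSSH public key"
rm -f "$tmpkey"
```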

Step 2: Enable SSH for the Worker Node on Site 1
  1. Enable SSH for Newly Deployed Worker Node
    • From the vSphere Client in Site 1
      • Locate the newly deployed Worker Node: HznCCxx-01b
        • Where xx is the attendee Identifier. In our example it is 22
        • From the Summary Tab
          • LAUNCH WEB CONSOLE
            • The Console Session Opens in a new tab
  1. In the Worker Node Connector Console (HznCC22-01b)
    • Login user: root
    • Password: VMware1!VMware1!
  1. In Cloud Connector Console, activate ccadmin
    • Type passwd ccadmin and press enter
    • New Password: VMware1!VMware1! and press enter
    • Retype new password: VMware1!VMware1! and press enter
  1. From Cloud Connector Console
    • Execute the following command
      • /opt/vmware/bin/configure-adapter.py --sshEnable

Step 3: Configure Worker Node on Site 1

To configure the Worker Node, we need to SSH into both the Primary and Worker Nodes and run commands to pair them together.

  1. Login securely via SSH with CCADMIN on Primary Node
    • On ControlCenter Desktop
      • Right Click on Start Menu open Command Prompt (Admin)
      • Inside the Command Prompt
        • Type ssh ccadmin@192.168.110.12 and press enter
          • When prompted Are you sure you want to continue connecting (yes/no)? Type yes
            • Once logged in to a ssh session of primary node, switch to root account
              • Type sudo -i
                • In the password prompt, Type VMware1!VMware1! and press enter
  1. Inside the SSH session of the Primary node
    • Type /opt/vmware/sbin/primary-cluster-config.sh -as 192.168.110.14 and press enter
      • When prompted Are you sure you want to continue connecting (yes/no)? Type yes
        • When prompted for the Worker node (192.168.110.14) password
          • Type VMware1!VMware1! and press enter
            • After the command completes, the output displays a new command that must be executed on the Worker node
              • Copy the command and save it in Notepad or Notepad++ on the ControlCenter Desktop
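The command printed by primary-cluster-config.sh takes three arguments after -a: the primary node's IP, a one-time join token, and a certificate hash, much like a Kubernetes join command. The sketch below pulls the pieces apart using the Site 1 example output shown later in this step; the field layout is assumed from that example.

```shell
#!/bin/sh
# The worker pairing command has the shape:
#   /opt/vmware/sbin/worker-cluster-config.sh -a <primary-ip> <token> <cert-hash>
# The string below is the Site 1 example from this lab.
cmd="/opt/vmware/sbin/worker-cluster-config.sh -a 192.168.110.12 jwdtsm.3xqqgeinm5kgc2kd b23a9a1059f507cac41cfecf1f3f3b536f41d93e68c9632f3363871fa255f38c"
set -- $cmd                 # word-split into positional parameters
primary_ip="$3"
token="$4"
cert_hash="$5"
echo "primary=$primary_ip"
echo "token=$token"
```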
  1. Login securely via SSH with CCADMIN on Worker Node
    • On ControlCenter Desktop
      • Right Click on Start Menu open Command Prompt (Admin)
      • Inside the Command Prompt
        • Type ssh ccadmin@192.168.110.14 and press enter
          • When prompted Are you sure you want to continue connecting (yes/no)? Type yes
            • Once logged in to a ssh session of Worker node, switch to root account
              • Type sudo -i
                • In the password prompt, Type VMware1!VMware1! and press enter
  1. Inside the SSH session of the Worker node
    • Copy and paste the command from Step 2, which was saved in Notepad or Notepad++
    • For example
      • /opt/vmware/sbin/worker-cluster-config.sh -a 192.168.110.12 jwdtsm.3xqqgeinm5kgc2kd b23a9a1059f507cac41cfecf1f3f3b536f41d93e68c9632f3363871fa255f38c
        • Once done, the Worker Node configuration is complete for Site1
  1. Inside the SSH session of the Worker node
    • Type sudo -i and press enter
    • Type kubectl get pods -A and press enter
    • Type kubectl get nodes and press enter
  1. Switch to your Primary Node (192.168.110.12) to verify the configuration
    • Run the following commands
      • Type kubectl get nodes and press enter
      • Type kubectl get pods -A and press enter

Part 3: Deploy and Configure the Worker Node on Site 2
Step 1: Deploy Worker Node on Site 2
  1. From the ControlCenter Desktop
    • Launch Google Chrome Site2 profile
    • In the address Bar
      • Enter https://vcenterXX-02a.euc-livefire.com
        • Where XX is your assigned POD ID
    • On the vCenter Getting started page
    • Click Launch VSPHERE CLIENT (HTML 5)
  1. In the vSphere Client
    • Expand vcenterXX-02a.euc-livefire.com > Region02A > Seattle
      • Right Click on esxi-02a.euc-livefire.com
        • Click Deploy OVF Template
  1. In the Deploy OVF Template Window
    • Under Select an OVF Template
      • Select Local File radio button
        • Click on UPLOAD FILES
          • From the upload window
            • Navigate to Desktop > Software > Horizon
              • Select horizon-cloud-connector-2.2.0.0-19666316_OVF10.ova
                • Click Open
          • Click Next
  1. In the Select a name and folder Window
    • Virtual machine name: HznCCxx-02b
      • Where xx is the attendee Identifier. In our example it is 22
    • In Select a location for the virtual machine Section
      • Select Region02A
      • Click Next
  1. In Select a compute resource Window
    • Expand by clicking Region02A > Seattle
      • Select esxi-02a.euc-livefire.com
      • Click Next
  1. In the Review details Window
    • Click Next
  1. In the License agreements Window
    • Click on I accept all license agreements check box
    • Click Next
  1. In Select storage Window
    • Select CorpLun02a
    • From Select virtual disk format dropdown
      • Select Thin Provision
      • Click Next
  1. In the Select networks Window
    • From Destination Network dropdown
      • Select VM Network
      • Click Next
  1. In the Customize template Window
    • Under Application Section which is denoted as 1
      • Root Password : VMware1!VMware1!
      • Confirm Password : VMware1!VMware1!
      • Worker Node: Select the checkbox to mark this appliance as a worker node.
      • Public key for ccadmin user: Open id_rsa.pub in Notepad or Notepad++ and copy its contents into this field
        • Note:
          • This is the public key we generated using the command prompt in Part 1
          • The public key should start with ssh-rsa
          • The id_rsa.pub file is located at
            • C:\Users\Administrator\.ssh
    • In the Proxy Section
      • Leave the default values
      • Click on Proxy to collapse the Proxy section
      • Scroll down to the Network Properties section
    • In the Network Properties Section which is denoted as 2
      • Default Gateway : 192.168.210.1
      • Domain Name: euc-livefire.com
      • Domain Search Path: euc-livefire.com
      • Domain Name Servers: 192.168.210.10
      • Network IP Address: 192.168.210.14
      • Network Netmask: 255.255.255.0
      • Click Next
  1. In the Ready to Complete Window
    • Verify all the properties
    • Click Finish
      • Wait for the deployment of the OVA to complete.
      • This may take around 15 minutes
  1. In the vCenter Admin Console
    • Select and right-click your hznCCXX-02b connector
      • Where XX is your POD ID
      • Select Power > Power On

Step 2: Enable SSH for the Worker Node on Site 2
  1. Enable SSH for Newly Deployed Worker Node
    • From the vSphere Client in Site 2
      • Locate the newly deployed Worker Node: HznCCxx-02b
        • Where xx is the attendee Identifier. In our example it is 22
        • From the Summary Tab
          • LAUNCH WEB CONSOLE
            • The Console Session Opens in a new tab
  1. In the Worker Node Connector Console (HznCC22-02b)
    • Login user: root
    • Password: VMware1!VMware1!
  1. In Cloud Connector Console, activate ccadmin
    • Type passwd ccadmin and press enter
    • New Password: VMware1!VMware1! and press enter
    • Retype new password: VMware1!VMware1! and press enter
  1. From Cloud Connector Console
    • Execute the following command
      • /opt/vmware/bin/configure-adapter.py --sshEnable

Step 3: Configure Worker Node on Site 2
  1. Login securely via SSH with CCADMIN on Primary Node
    • On Control Center Desktop
      • Right Click on Start Menu open Command Prompt (Admin)
      • Inside the Command Prompt
        • Type ssh ccadmin@192.168.210.12 and press enter
          • When prompted Are you sure you want to continue connecting (yes/no)? Type yes
            • Once logged in to a ssh session of primary node, switch to root account
              • Type sudo -i
                • In the password prompt, Type VMware1!VMware1! and press enter
  1. Inside the SSH session of the Primary node
    • Type /opt/vmware/sbin/primary-cluster-config.sh -as 192.168.210.14 and press enter
      • When prompted Are you sure you want to continue connecting (yes/no)? Type yes
        • When prompted for the Worker node (192.168.210.14) password
          • Type VMware1!VMware1! and press enter
            • After the command completes, the output displays a new command that must be executed on the Worker node
              • Copy the command and save it in Notepad or Notepad++ on the ControlCenter Desktop
  1. Login securely via SSH with CCADMIN on Worker Node
    • On Control Center Desktop
      • Right Click on Start Menu open Command Prompt (Admin)
      • Inside the Command Prompt
        • Type ssh ccadmin@192.168.210.14 and press enter
          • When prompted Are you sure you want to continue connecting (yes/no)? Type yes
            • Once logged in to a ssh session of Worker node, switch to root account
              • Type sudo -i
                • In the password prompt, Type VMware1!VMware1! and press enter
  1. Inside the SSH session of the Worker node
    • Copy and paste the command from Step 2, which was saved in Notepad or Notepad++
    • For example
      • /opt/vmware/sbin/worker-cluster-config.sh -a 192.168.210.12 abyhf6.woazwk2jisbzovh5 b5382544c5645bf20ed21ab4250267cbd3f2e082e0b52495e14c6410e691e609
        • Once done, the Worker Node configuration is complete for Site2
  1. Inside the SSH session of the Worker node
    • Type sudo -i and press enter
    • Type kubectl get pods -A and press enter
    • Type kubectl get nodes and press enter
  1. Switch to your Primary Node (192.168.210.12) on Site 2 to verify the configuration
    • Run the following commands
      • Type kubectl get nodes and press enter
      • Type kubectl get pods -A and press enter
