Pas Apicella

Information on Pivotal Cloud Foundry (PAS/PKS/PFS) - Continuously deliver any app to every major private and public cloud with a single platform

Integrating Cloud Foundry with Spinnaker

Wed, 2019-02-13 19:36
I previously blogged about "Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere" and then how to invoke the UI using a "kubectl port-forward".

http://theblasfrompas.blogspot.com/2019/02/installing-spinnaker-on-pivotal.html
http://theblasfrompas.blogspot.com/2019/02/exposing-spinnaker-ui-endpoint-from.html

Steps

1. Exec into hal pod using a command as follows:

$ kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

Note: You can get the POD name as follows

papicella@papicella:~$ kubectl get pods | grep halyard
myspinnaker-spinnaker-halyard-0       1/1       Running     0          6d

2. Create a file settings-local.js in the directory ~/.hal/default/profiles/

window.spinnakerSettings.providers.cloudfoundry = {
  defaults: {account: 'my-cloudfoundry-account'}
};

3. Create a file clouddriver-local.yml with contents as follows. You can add multiple accounts, but in this example I am just adding one (a sketch with a second account follows the example below)

cloudfoundry:
  enabled: true
  accounts:
    - name: PWS
      user: papicella-pas@pivotal.io
      password: yyyyyyy
      api: api.run.pivotal.io
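
As a rough sketch only (not part of the original post), a clouddriver-local.yml with a second account could be written from the hal pod as shown here; the PCF-DEV account name, user, password and API endpoint are made-up placeholders.

# Sketch: two Cloud Foundry accounts in clouddriver-local.yml (placeholders marked)
cat > ~/.hal/default/profiles/clouddriver-local.yml <<'EOF'
cloudfoundry:
  enabled: true
  accounts:
    - name: PWS
      user: papicella-pas@pivotal.io
      password: yyyyyyy
      api: api.run.pivotal.io
    - name: PCF-DEV                  # hypothetical second account
      user: admin                    # placeholder credentials
      password: zzzzzzz
      api: api.sys.example.com       # placeholder API endpoint
EOF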

4. If you are working with an existing installation of Spinnaker, apply your changes:

spinnaker@myspinnaker-spinnaker-halyard-0:~/.hal/default/profiles$ hal deploy apply
+ Get current deployment
  Success
+ Prep deployment
  Success
Problems in halconfig:
- WARNING There is a newer version of Halyard available (1.15.0),
  please update when possible
? Run 'sudo apt-get update && sudo apt-get install
  spinnaker-halyard -y' to upgrade

+ Preparation complete... deploying Spinnaker
+ Get current deployment
  Success
+ Apply deployment
  Success
+ Run `hal deploy connect` to connect to Spinnaker.

5. Once this is done, in the UI you will see any applications in your Organisations appear. In this example it's a single application called "Spring", as shown below



6. In the example below, when "Creating an Application" we can select the Orgs/Spaces we wish to use



More Information

Cloud Foundry Integration
https://www.spinnaker.io/setup/install/providers/cf/

Cloud Foundry Resource Mapping
https://www.spinnaker.io/reference/providers/cf/



Categories: Fusion Middleware

Exposing Spinnaker UI endpoint from a helm based spinnaker install on PKS with NSX-T

Thu, 2019-02-07 22:50
I previously blogged about "Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere" and then quickly invoking the UI using a "kubectl port-forward" as per this post.

http://theblasfrompas.blogspot.com/2019/02/installing-spinnaker-on-pivotal.html

That will work, but it won't get you too far, so here is what you need to do so the UI works completely using the spin-gate API endpoint.

Steps (Once Spinnaker is Running)

1. Expose spin-deck and spin-gate to create external LB IPs. This is where NSX-T with PKS on-prem is extremely useful: NSX-T provides LB capability for the K8s cluster services you create, making it as easy as using a public cloud LB with Kubernetes.

$ kubectl expose service -n default spin-deck --type LoadBalancer --port 9000 --target-port 9000 --name spin-deck-public
service/spin-deck-public exposed

$ kubectl expose service -n default spin-gate --type LoadBalancer --port 8084 --target-port 8084 --name spin-gate-public
service/spin-gate-public exposed

2. That will create two external IPs for us, as shown below

$ kubectl get svc

...

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)          AGE
spin-deck-public   LoadBalancer   10.100.200.200   10.195.44.1,100.64.128.15   9000:30131/TCP   ..
spin-gate-public   LoadBalancer   10.100.200.5     10.195.44.2,100.64.128.15   8084:30312/TCP   ..

...

3. Exec into hal pod using a command as follows

$ kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

4. Run these commands in order on the hal pod. Make sure you use the right IP addresses as per the output at #2 above: UI = spin-deck-public and API = spin-gate-public

$ hal config security ui edit --override-base-url http://10.195.44.1:9000
$ hal config security api edit --override-base-url http://10.195.44.2:8084
$ hal deploy apply
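
As an optional sanity check of my own (not in the original post), Gate is a Spring Boot service, so hitting its health endpoint on the spin-gate-public address should return JSON once the override base URLs have been applied.

# Assumes the spin-gate-public EXTERNAL-IP from step 2
curl -s http://10.195.44.2:8084/health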

5. Port forward spin-gate on your localhost. You shouldn't really need to do this, but for some reason it was required; I suspect at some point this won't be necessary.

$ export GATE_POD=$(kubectl get pods --namespace default -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
$ echo $GATE_POD
spin-gate-85cc7465bd-v2q2l
$ kubectl port-forward --namespace default $GATE_POD 8084
Forwarding from 127.0.0.1:8084 -> 8084
Forwarding from [::1]:8084 -> 8084

6. Access the UI using the IP of spin-deck-public


If it worked you should see screens as follows, showing that we can access the tabs and "Create Application" without errors from the gate API endpoint







Categories: Fusion Middleware

Spring Cloud GCP and authentication from your Spring Boot Application

Wed, 2019-02-06 17:18
When using Spring Cloud GCP you will need to authenticate at some point in order to use the GCP services. In the example below, using a GCP Cloud SQL instance, you really only need to do 3 things to access it externally from your Spring Boot application.

1. Enable the Google Cloud SQL API which is detailed here

  https://cloud.google.com/sql/docs/mysql/admin-api/

2. Ensure that your GCP SDK can log in to your Google Cloud SQL. This command will take you to a web page asking which Google account you want to use

  $ gcloud auth application-default login

3. Finally some application properties in your Spring Boot application detailing the Google Cloud SQL instance name and database name as shown below.

spring.cloud.gcp.sql.instance-connection-name=fe-papicella:australia-southeast1:apples-db
spring.cloud.gcp.sql.database-name=employees

Now when you do that and your application starts up, you will see a log message as follows, clearly warning you that this method of authentication can have implications at some point.

2019-02-07 09:10:26.700  WARN 2477 --- [           main] c.g.a.oauth2.DefaultCredentialsProvider  : Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/.

Clearly that's something we have to resolve. To do that we can simply add another Spring Boot application property pointing to a service account JSON file for us to authenticate against, which removes the warning.

spring.cloud.gcp.credentials.location=file:/Users/papicella/piv-projects/GCP/fe-papicella-8077fe1126b2.json

Note: You can also use an ENV variable as follows

export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"

You can get a JSON key generated from the GCP console "IAM and Admin -> Service Accounts" page
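
If you prefer the CLI over the console, here is a hedged sketch of creating the service account and key with gcloud; the service account name "sb-cloudsql" is a placeholder and the project ID is assumed from the instance connection name above.

# Sketch only: create a service account, grant Cloud SQL client access, download a JSON key
gcloud iam service-accounts create sb-cloudsql --display-name "Spring Boot Cloud SQL"   # placeholder name
gcloud projects add-iam-policy-binding fe-papicella \
  --member "serviceAccount:sb-cloudsql@fe-papicella.iam.gserviceaccount.com" \
  --role "roles/cloudsql.client"
gcloud iam service-accounts keys create ./sb-cloudsql-key.json \
  --iam-account sb-cloudsql@fe-papicella.iam.gserviceaccount.com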


For more information on authentication visit this link https://cloud.google.com/docs/authentication/getting-started



Categories: Fusion Middleware

Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere

Thu, 2019-01-31 19:47
I decided to install Spinnaker on my vSphere PKS installation into one of my clusters. Here is how I did this, step by step.

1. You will need PKS installed which I have on vSphere with PKS 1.2 using NSX-T. Here is a screen shot of that showing Ops Manager UI


Make sure your PKS plans have these check boxes enabled; without these checked, Spinnaker will not install using the Helm chart we will be using below


2. In my setup I created a datastore which will be used by my K8s cluster. This is optional; you can set up PVCs however you see fit.



3. Now it's assumed you have a K8s cluster which I have as shown below. I used the PKS CLI to create a small cluster of 1 master node and 3 worker nodes

$ pks cluster lemons

Name:                     lemons
Plan Name:                small
UUID:                     19318553-472d-4bb5-9783-425ce5626149
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   lemons.haas-65.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.y.y.y
Network Profile Name:

4. Create a Storage Class as follows. Notice how we reference our vSphere datastore named "k8s" as per step 2

$ kubectl create -f storage-class-vsphere.yaml

Note: storage-class-vsphere.yaml defined as follows

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: k8s
  diskformat: thin
  fstype: ext3

5. Set this Storage Class as the default

$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Verify

papicella@papicella:~$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   14h
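
As a quick optional check of my own (not in the original post), you can make sure the default storage class dynamically provisions a vSphere volume by creating and deleting a small PVC.

# Sketch: the PVC should reach STATUS "Bound" if the "fast" storage class works
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim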

6. Install helm as shown below

$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller
$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ sleep 10
$ helm ls

Note: rbac-config.yaml defined as follows

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
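
Before moving on, a quick check I'd add here (not part of the original post) is to confirm Tiller came up and that Helm can talk to it.

# Tiller runs as a deployment in kube-system; helm version should report both client and server
kubectl get pods -n kube-system -l app=helm
helm version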

7. Install Spinnaker into your K8s cluster as follows

$ helm install --name myspinnaker stable/spinnaker --timeout 6000 --debug

If everything worked

papicella@papicella:~$ kubectl get pods
NAME                                  READY     STATUS      RESTARTS   AGE
myspinnaker-install-using-hal-gbd96   0/1       Completed   0          14m
myspinnaker-minio-5d4c999f8b-ttm7f    1/1       Running     0          14m
myspinnaker-redis-master-0            1/1       Running     0          14m
myspinnaker-spinnaker-halyard-0       1/1       Running     0          14m
spin-clouddriver-7b8cd6f964-ksksl     1/1       Running     0          12m
spin-deck-749c84fd77-j2t4h            1/1       Running     0          12m
spin-echo-5b9fd6f9fd-k62kd            1/1       Running     0          12m
spin-front50-6bfffdbbf8-v4cr4         1/1       Running     1          12m
spin-gate-6c4959fc85-lj52h            1/1       Running     0          12m
spin-igor-5f6756d8d7-zrbkw            1/1       Running     0          12m
spin-orca-5dcb7d79f7-v7cds            1/1       Running     0          12m
spin-rosco-7cb8bd4849-c44wg           1/1       Running     0          12m

8. At the end of the Helm command, once complete, you will see output as follows

1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace default $DECK_POD 9000

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

9. Go ahead and run these commands to connect to the Spinnaker UI from your localhost

$ export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace default $DECK_POD 9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

10. Browse to http://127.0.0.1:9000



More Information

Spinnaker
https://www.spinnaker.io/

Pivotal Container Service
https://pivotal.io/platform/pivotal-container-service


Categories: Fusion Middleware

Testing out the new PFS (Pivotal Function Service) alpha release on minikube

Mon, 2019-01-21 19:25
I quickly installed PFS on minikube as per the instructions below so I could write my own function service. Below shows that function service and how I invoked it using the PFS CLI and Postman.

1. Install PFS on minikube using the instructions at the following URL

https://docs.pivotal.io/pfs/install-on-minikube.html

2. Once installed, verify PFS is running using some commands as follows

$ watch -n 1 kubectl get pod --all-namespaces

Output:



Various namespaces are created as shown below:

$ kubectl get namespaces
NAME               STATUS   AGE
default            Active   19h
istio-system       Active   18h
knative-build      Active   18h
knative-eventing   Active   18h
knative-serving    Active   18h
kube-public        Active   19h
kube-system        Active   19h

Ensure PFS is installed as shown below:

$ pfs version
Version
  pfs cli: 0.1.0 (e5de84d12d10a060aeb595310decbe7409467c99)

3. Now we are going to deploy this employee function which exists on GitHub as follows

https://github.com/papicella/emp-function-service


The Function code is as follows:

package com.example.empfunctionservice;

import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import java.util.function.Function;

@Slf4j
@SpringBootApplication
public class EmpFunctionServiceApplication {

    private static EmployeeService employeeService;

    public EmpFunctionServiceApplication(EmployeeService employeeService) {
        this.employeeService = employeeService;
    }

    @Bean
    public Function<String, String> findEmployee() {
        return id -> {
            String response = employeeService.getEmployee(id);
            return response;
        };
    }

    public static void main(String[] args) {
        SpringApplication.run(EmpFunctionServiceApplication.class, args);
    }

}

4. We are going to deploy a Spring Boot Function as per the REPO above. More information on Java Functions for PFS can be found here

https://docs.pivotal.io/pfs/using-java-functions.html


5. Let's create a function called "emp-function" as shown below

$ pfs function create emp-function --git-repo https://github.com/papicella/emp-function-service --image $REGISTRY/$REGISTRY_USER/emp-function -w -v

Output: (Just showing the last few lines here)

papicella@papicella:~/pivotal/software/minikube$ pfs function create emp-function --git-repo https://github.com/papicella/emp-function-service --image $REGISTRY/$REGISTRY_USER/emp-function -w -v
Waiting for LatestCreatedRevisionName
Waiting on function creation: checkService failed to obtain service status for observedGeneration 1
LatestCreatedRevisionName available: emp-function-00001

...

default/emp-function-00001-gpn7p[build-step-build]: [INFO] BUILD SUCCESS
default/emp-function-00001-gpn7p[build-step-build]: [INFO] ------------------------------------------------------------------------
default/emp-function-00001-gpn7p[build-step-build]: [INFO] Total time: 12.407 s
default/emp-function-00001-gpn7p[build-step-build]: [INFO] Finished at: 2019-01-22T00:12:39Z
default/emp-function-00001-gpn7p[build-step-build]: [INFO] ------------------------------------------------------------------------
default/emp-function-00001-gpn7p[build-step-build]:        Removing source code
default/emp-function-00001-gpn7p[build-step-build]:
default/emp-function-00001-gpn7p[build-step-build]: -----> riff Buildpack 0.1.0
default/emp-function-00001-gpn7p[build-step-build]: -----> riff Java Invoker 0.1.3: Contributing to launch
default/emp-function-00001-gpn7p[build-step-build]:        Reusing cached download from buildpack
default/emp-function-00001-gpn7p[build-step-build]:        Copying to /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar
default/emp-function-00001-gpn7p[build-step-build]: -----> Process types:
default/emp-function-00001-gpn7p[build-step-build]:        web:      java -jar /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar $JAVA_OPTS --function.uri='file:///workspace/app'
default/emp-function-00001-gpn7p[build-step-build]:        function: java -jar /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar $JAVA_OPTS --function.uri='file:///workspace/app'
default/emp-function-00001-gpn7p[build-step-build]:

...

default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.617  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=1, name=pas)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.623  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=2, name=lucia)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.628  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=3, name=lucas)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.632  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=4, name=siena)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.704  INFO 1 --- [       Thread-2] o.s.c.f.d.FunctionCreatorConfiguration   : Located bean: findEmployee of type class com.example.empfunctionservice.EmpFunctionServiceApplication$$Lambda$791/373359604

pfs function create completed successfully

6. Let's invoke our function as shown below by returning each Employee record using its ID.

$ pfs service invoke emp-function --text -- -w '\n' -d '1'
curl http://192.168.64.3:32380/ -H 'Host: emp-function.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d 1
Employee(id=1, name=pas)

$ pfs service invoke emp-function --text -- -w '\n' -d '2'
curl http://192.168.64.3:32380/ -H 'Host: emp-function.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d 2
Employee(id=2, name=lucia)

The "pfs service invoke" will show you what an external command will look like to invoke the function service. The IP address here is just the same IP address returned by "minikube ip" as shown below.

$ minikube ip
192.168.64.3

7. Let's view our services using "pfs" CLI

$ pfs service list
NAME           STATUS
emp-function   Running
hello          Running

pfs service list completed successfully

8. Invoking from Postman, ensuring we issue a POST request and pass the correct headers as shown below
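
The equivalent raw request to what the Postman screen shots show, based on the curl command "pfs service invoke" printed earlier (the IP is the "minikube ip" value from step 6):

# Should return Employee(id=3, name=lucas) given the preloaded data above
curl http://192.168.64.3:32380/ \
  -X POST \
  -H 'Host: emp-function.default.example.com' \
  -H 'Content-Type: text/plain' \
  -w '\n' \
  -d 3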





More Information

https://docs.pivotal.io/pfs/index.html

Categories: Fusion Middleware

Creating a local kubectl config file for the proxy to your Kubernetes API server

Fri, 2019-01-04 03:04
On my Mac the kubectl CONFIG file exists in a painful location as follows

  $HOME/.kube/config

When using the command "kubectl proxy", invoking the UI requires you to browse to the CONFIG file, which Finder doesn't expose easily. One way around this is as follows

1. Save a copy of that config file in your current directory as follows

papicella@papicella:~/temp$ cat ~/.kube/config > kubeconfig
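
An alternative sketch of my own (not from the original post): let kubectl write the copy for you, which honours $KUBECONFIG and embeds any referenced certificate files so the file is fully self-contained.

kubectl config view --raw --flatten > kubeconfig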

2. Invoke "kubectl proxy" to start a UI server to your K8's cluster

papicella@papicella:~/temp$ kubectl proxy
Starting to serve on 127.0.0.1:8001

3. Navigate to the UI using a URL as follows

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview



4. At this point we can browse to the easily accessible temp directory, select the file "kubeconfig" we created at step #1, and then click the "Sign In" button


Categories: Fusion Middleware

PCF Healthwatch 1.4 just got a new UI

Thu, 2018-12-13 04:45
PCF Healthwatch is a service for monitoring and alerting on the current health, performance, and capacity of PCF to help operators understand the operational state of their PCF deployment

Finally got around to installing the new PCF Healthwatch 1.4, and the first thing which struck me was the UI main dashboard page. It's clear what I need to look at in seconds, and the alerts on the right hand side are also useful.

Some screen shots below





papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-99$ http http://healthwatch.run.haas-99.pez.pivotal.io/info
HTTP/1.1 200 OK
Content-Length: 45
Content-Type: application/json;charset=utf-8
Date: Thu, 13 Dec 2018 10:26:07 GMT
X-Vcap-Request-Id: ac309698-74a6-4e94-429a-bb5673c1c8f7

{
    "message": "PCF Healthwatch available"
}

More Information

Pivotal Cloud Foundry Healthwatch
https://docs.pivotal.io/pcf-healthwatch/1-4/index.html

Categories: Fusion Middleware

Disabling Spring Security if you don't require it

Sun, 2018-12-09 17:42
When using the Spring Cloud Services Starter Config Client dependency, for example, Spring Security will also be included (Config Servers are protected by OAuth2). As a result, basic authentication will also be enabled on all the service endpoints of your application, which may not be the desired result if you're just building a demo, for example.

Add the following to conditionally disable security in your Spring Boot main class:

package com.example.employeeservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    @Configuration
    static class ApplicationSecurity extends WebSecurityConfigurerAdapter {

        @Override
        public void configure(WebSecurity web) throws Exception {
            web
                .ignoring()
                .antMatchers("/**");
        }
    }
}

Categories: Fusion Middleware

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here

https://content.pivotal.io/blog/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today

Docs as follows

https://docs.pivotal.io/pfs/index.html
Categories: Fusion Middleware

Spring Cloud GCP using Spring Data JPA with MySQL 2nd Gen 5.7

Sun, 2018-10-07 19:06
Spring Cloud GCP adds integrations with Spring JDBC so you can run your MySQL or PostgreSQL databases in Google Cloud SQL using Spring JDBC, or other libraries that depend on it like Spring Data JPA. Here is an example of using Spring Data JPA with "Spring Cloud GCP".

1. First we need a MySQL 2nd Gen 5.7 database to exist in our GCP account which I have previously created as shown below




2. Create a new project using Spring Initializr or however you like to create it, but ensure you have the following dependencies in place. Here is an example of what my pom.xml looks like; in short, add the following Maven dependencies as per the image below



pom.xml

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.5.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <spring-cloud-gcp.version>1.0.0.RELEASE</spring-cloud-gcp.version>
    <spring-cloud.version>Finchley.SR1</spring-cloud.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-rest</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

...

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>${spring-cloud-gcp.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3. Let's start by creating a basic Employee entity as shown below

Employee.java

package pas.apj.pa.sb.gcp;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
@Table(name = "employee")
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

}

4. Let's now add a Rest JpaRepository for our Entity

EmployeeRepository.java

package pas.apj.pa.sb.gcp;

import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}

5. Let's create a basic RestController to show all our Employee entities

EmployeeRest.java

package pas.apj.pa.sb.gcp;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class EmployeeRest {

    private EmployeeRepository employeeRepository;

    public EmployeeRest(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @RequestMapping("/emps-rest")
    public List<Employee> getAllemps() {
        return employeeRepository.findAll();
    }
}

6. Let's create an ApplicationRunner to show our list of Employees as the application starts up

EmployeeRunner.java

package pas.apj.pa.sb.gcp;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
public class EmployeeRunner implements ApplicationRunner {

    private EmployeeRepository employeeRepository;

    public EmployeeRunner(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        employeeRepository.findAll().forEach(System.out::println);
    }
}

7. Add a data.sql file to create some records in the database at application startup

data.sql

insert into employee (name) values ('pas');
insert into employee (name) values ('lucia');
insert into employee (name) values ('lucas');
insert into employee (name) values ('siena');

8. Finally our "application.yml" file will need to be able to connect to our MySQL instance running in GCP, as well as set some properties for JPA, as shown below

spring:
  jpa:
    hibernate:
      ddl-auto: create-drop
      use-new-id-generator-mappings: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MariaDB53Dialect
  cloud:
    gcp:
      sql:
        instance-connection-name: fe-papicella:australia-southeast1:apples-mysql-1
        database-name: employees
  datasource:
    initialization-mode: always
    hikari:
      maximum-pool-size: 1


A couple of things in here which are important.

- Set the Hibernate property "dialect: org.hibernate.dialect.MariaDB53Dialect". Without this, when Hibernate creates tables for your entities you will run into the error below, as Cloud SQL database tables are created using the InnoDB storage engine.

ERROR 3161 (HY000): Storage engine MyISAM is disabled (Table creation is disallowed).

- For a demo I don't need multiple DB connections so I set the datasource "maximum-pool-size" to 1

- Notice how I set the "instance-connection-name" and "database-name", which are vital for Spring Cloud SQL to establish database connections

9. Now we need to make sure we have a database called "employees" as per our "application.yml" setting.
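
A hedged CLI alternative to doing this in the console, assuming the Cloud SQL instance name from "application.yml":

gcloud sql databases create employees --instance=apples-mysql-1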


10. Now let's run our Spring Boot application and verify this is working, showing some output from the logs

- Connection being established

2018-10-08 10:54:37.333  INFO 89922 --- [           main] c.google.cloud.sql.mysql.SocketFactory   : Connecting to Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1] via ssl socket.
2018-10-08 10:54:37.335  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : First Cloud SQL connection, generating RSA key pair.
2018-10-08 10:54:38.685  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : Obtaining ephemeral certificate for Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1].
2018-10-08 10:54:40.132  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : Connecting to Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1] on IP [35.197.180.223].
2018-10-08 10:54:40.748  INFO 89922 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.

- Showing the 4 Employee records

Employee(id=1, name=pas)
Employee(id=2, name=lucia)
Employee(id=3, name=lucas)
Employee(id=4, name=siena)

11. Finally let's make a RESTful call as we defined above using HTTPie, as follows

pasapicella@pas-macbook:~$ http :8080/emps-rest
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 08 Oct 2018 00:01:42 GMT
Transfer-Encoding: chunked

[
    {
        "id": 1,
        "name": "pas"
    },
    {
        "id": 2,
        "name": "lucia"
    },
    {
        "id": 3,
        "name": "lucas"
    },
    {
        "id": 4,
        "name": "siena"
    }
]

More Information

Spring Cloud GCP
https://cloud.spring.io/spring-cloud-gcp/

Spring Cloud GCP SQL demo (This one is using Spring JDBC)
https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-sql-sample

Categories: Fusion Middleware

PKS - What happens when we create a new namespace with NSX-T

Mon, 2018-09-17 07:02
I previously blogged about the integration between PKS and NSX-T on this post

http://theblasfrompas.blogspot.com/2018/09/pivotal-container-service-pks-with-nsx.html

In this post let's show the impact of what occurs within NSX-T when we create a new namespace in our K8s cluster.

1. List the K8s clusters we have available

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks clusters

Name    Plan Name  UUID                                  Status     Action
apples  small      d9f258e3-247c-4b4c-9055-629871be896c  succeeded  UPDATE

2. Fetch the cluster config for our cluster into our local Kubectl config

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks get-credentials apples

Fetching credentials for cluster apples.
Context set for cluster apples.

You can now switch between clusters by using:
$kubectl config use-context

3. Create a new Namespace for the K8s cluster as shown below

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl create namespace production
namespace "production" created

4. View the Namespaces in the K8s cluster

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl get ns
NAME          STATUS    AGE
default       Active    12d
kube-public   Active    12d
kube-system   Active    12d
production    Active    9s

Using NSX-T manager the first thing you will see is a new Tier 1 router created for the K8s namespace "production"



Let's view its configuration via the "Overview" screen


Finally let's see the default "Logical Routers" as shown below



When we push workloads to the "production" namespace, it's this dynamically created configuration we get out of the box, allowing us to expose a "LoadBalancer" service as required across the Pods deployed within the namespace, as sketched below.
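
A minimal sketch of that (my own addition, not from the original post); the selector here is a hypothetical placeholder, but NSX-T should allocate a virtual server and external IP for the service automatically.

# Sketch: expose a LoadBalancer service in the new "production" namespace
cat <<'EOF' | kubectl create -n production -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: demo        # placeholder selector
EOF
kubectl get svc -n production demo-lb   # watch for an EXTERNAL-IP from NSX-T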

Categories: Fusion Middleware

Pivotal Container Service (PKS) with NSX-T on vSphere

Wed, 2018-09-05 06:15
It has taken some time, but now I was officially able to test PKS with NSX-T rather than using Flannel.

While there is a bit of initial setup to install NSX-T and PKS and then ensure PKS networking is NSX-T, the ease of rolling out multiple Kubernetes clusters with unique networking is greatly simplified by NSX-T. Here I am going to show what happens after pushing a workload to my PKS K8s cluster

First, before we can do anything, we need the following...

Pre Steps

1. Ensure you have NSX-T setup and a dashboard UI as follows


2. Ensure you have PKS installed. In this example I have it installed on vSphere, which at the time of this blog is the only supported/applicable platform we can use with NSX-T



The PKS tile needs to be set up to use NSX-T, which is done on this page of the tile configuration



3. You can see from the NSX-T manager UI that we have a Load Balancer set up, as shown below. Navigate to "Load Balancing -> Load Balancers"



And this Load Balancer is backed by a few "Virtual Servers", one for http (port 80) and the other for https (port 443), which can be seen when you select the Virtual Servers link


From here we have logical switches created for each of the Kubernetes namespaces. We see two for our load balancer, and the other 3 are for the 3 K8s namespaces which are (default, kube-public, kube-system)


Here is how we verify the namespaces we have in our K8s cluster

pasapicella@pas-macbook:~/pivotal $ kubectl get ns
NAME          STATUS    AGE
default       Active    5h
kube-public   Active    5h
kube-system   Active    5h

All of the logical switches are connected to the T0 Logical Switch by a set of T1 Logical Routers


For these to be accessible, they are linked to the T0 Logical Router via a set of router ports



Now let's push a basic K8s workload and see what NSX-T and PKS give us out of the box...

Steps

Let's create our K8s cluster using the PKS CLI. You will need a PKS CLI user, which can be created following this doc

https://docs.pivotal.io/runtimes/pks/1-1/manage-users.html

1. Login using the PKS CLI as follows

$ pks login -k -a api.pks.haas-148.pez.pivotal.io -u pas -p ****

2. Create a cluster as shown below

$ pks create-cluster apples --external-hostname apples.haas-148.pez.pivotal.io --plan small

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  In Progress

3. Wait for the cluster to have created as follows

$ pks cluster apples

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  10.1.1.10

The PKS CLI is basically telling BOSH to go ahead and, based on the small plan, create a fully functional/working K8s cluster, from the VMs to all the processes that go along with them, and once it's up, keep it running in the event of failure.

Here is an example of one of the worker VMs of the cluster shown in the vSphere Web Client



4. Using the following YAML file, let's push that workload to our K8s cluster

apiVersion: v1
kind: Service
metadata:
  labels:
    app: fortune-service
    deployment: pks-workshop
  name: fortune-service
spec:
  ports:
  - port: 80
    name: ui
  - port: 9080
    name: backend
  - port: 6379
    name: redis
  type: LoadBalancer
  selector:
    app: fortune
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: fortune
    deployment: pks-workshop
  name: fortune
spec:
  containers:
  - image: azwickey/fortune-ui:latest
    name: fortune-ui
    ports:
    - containerPort: 80
      protocol: TCP
  - image: azwickey/fortune-backend-jee:latest
    name: fortune-backend
    ports:
    - containerPort: 9080
      protocol: TCP
  - image: redis
    name: redis
    ports:
    - containerPort: 6379
      protocol: TCP

5. Push the workload as follows once the above YAML is saved to a file

$ kubectl create -f fortune-teller.yml
service "fortune-service" created
pod "fortune" created

6. Verify the PODS are running as follows

$ kubectl get all
NAME         READY     STATUS    RESTARTS   AGE
po/fortune   3/3       Running   0          35s

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                      AGE
svc/fortune-service   LoadBalancer   10.100.200.232   10.195.3.134   80:30591/TCP,9080:32487/TCP,6379:32360/TCP   36s
svc/kubernetes        ClusterIP      10.100.200.1     <none>         443/TCP                                      5h
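
A quick check I'd add here (not in the original post): the fortune-service LB should answer on port 80 (the "ui" port from the YAML) at the EXTERNAL-IP shown above.

curl -i http://10.195.3.134/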

Great, so now let's head back to our NSX-T manager UI and see what has been created. From the above output you can see an LB service is created and an external IP address assigned.

7. The first thing you will notice is that in "Virtual Servers" we have some new entries for each of our containers, as shown below


and ...


Finally, the LB we previously had in place shows our "Virtual Servers" added to its config and routable



More Information

Pivotal Container Service
https://docs.pivotal.io/runtimes/pks/1-1/

VMware NSX-T
https://docs.vmware.com/en/VMware-NSX-T/index.html
Categories: Fusion Middleware

PCF Platform Automation with Concourse (PCF Pipelines)

Mon, 2018-08-20 03:28
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

... and from there setting up Concourse

http://theblasfrompas.blogspot.com/2018/08/deploying-concourse-using-my-bubble.html

.. of course this was created so I can now use the PCF Pipelines to deploy Pivotal Cloud Foundry's Pivotal Application Service (PAS). At a high level, this is how to achieve this, with some screen shots of the end result.

Steps

1. To get started you would use this link as follows. In my example I was deploying PCF to AWS

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf

AWS Install Pipeline

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf/aws

2. Create a versioned bucket for holding terraform state. On AWS that will look as follows


3. Unless you ensure the AWS pre-reqs are met you won't be able to install PCF, so this link highlights all that you will need for installing PCF on AWS, such as key pairs, limits, etc.

https://docs.pivotal.io/pivotalcf/2-1/customizing/aws.html

4. Create a public DNS zone and get its zone ID; we will need that when we set up the pipeline shortly. I also created a self-signed certificate for my DNS as part of the setup, which is required as well.





5. At this point we can download the PCF Pipelines from network.pivotal.io or you can use the link as follows

https://network.pivotal.io/products/pcf-automation/



6. Once you have unzipped the file you would then change to the directory for the right IaaS, in my case "aws"

$ cd pcf-pipelines/install-pcf/aws


7. Replace all of the CHANGEME values in params.yml with real values for your AWS env. This file is documented so you are clear about what you need to add and where. Most of the values are defaults of course.

8. Log in to Concourse using the "fly" command line

$ fly --target pcfconcourse login  --concourse-url https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com -k

9. Add pipeline

$ fly -t pcfconcourse set-pipeline -p deploy-pcf -c pipeline.yml -l params.yml

10. Unpause pipeline

$ fly -t pcfconcourse unpause-pipeline -p deploy-pcf

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines/pcf-pipelines/install-pcf/aws$ fly -t pcfconcourse pipelines
name        paused  public
deploy-pcf  no      no

11. The pipeline on concourse will look as follows



12. Now to execute the pipeline you have to manually run 2 tasks

- Run bootstrap-terraform-state job manually




- Run create-infrastructure manually
 


At this point the pipeline will kick off automatically. If you need to re-run due to an issue, you can manually kick off the task again after you fix what you need to fix. The “wipe-env” task will take everything for PAS down and terraform removes all IaaS config as well.
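
The same two manual kicks can also be done from the fly CLI; this is a sketch of my own, assuming the job names match the task names mentioned above.

fly -t pcfconcourse trigger-job -j deploy-pcf/bootstrap-terraform-state
fly -t pcfconcourse trigger-job -j deploy-pcf/create-infrastructure
fly -t pcfconcourse watch -j deploy-pcf/create-infrastructure   # follow the build output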

While running each task current state is shown as per the image below


If successful, your AWS account will show the PCF VMs created, for example


Verifying that PCF installed is best done using Pivotal Operations Manager, as shown below



More Information

https://network.pivotal.io/products/pcf-automation/


Categories: Fusion Middleware

Deploying concourse using my "Bubble" created Bosh director

Fri, 2018-08-17 23:27
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

Now with the bosh director deployed it's time to deploy Concourse itself. The process is very straightforward as per the steps below.

1. First let's clone the bosh concourse deployment using the GitHub project as follows



2. Target the bosh director and log in. We must set ENV variables to connect to the AWS bosh correctly using "eval" as we did in the previous post; this will set all the ENV variables we need

$ eval "$(bbl print-env -s state)"
$ bosh alias-env aws-env
$ bosh -e aws-env log-in

3. At this point we need to set the external URL, which is essentially the load balancer we created when we deployed the Bosh Director in the previous post. To get that value, run a command as follows from where we deployed the bosh director, as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bbl lbs -s state
Concourse LB: bosh-director-aws-concourse-lb [bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com]

4. Now let's set that ENV variable as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ export external_url=https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com

5. Now from the cloned bosh concourse directory change to the directory "concourse-bosh-deployment/cluster" as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ cd concourse-bosh-deployment/cluster

6. Upload stemcell as follows

$ bosh upload-stemcell light-bosh-stemcell-3363.69-aws-xen-hvm-ubuntu-trusty-go_agent.tgz

Verify:

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-bosh stemcells
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name                                     Version  OS             CPI  CID
bosh-aws-xen-hvm-ubuntu-trusty-go_agent  3363.69  ubuntu-trusty  -    ami-0812e8018333d59a6

(*) Currently deployed

1 stemcells

Succeeded
 
7. Now let's deploy Concourse with a command as follows. Make sure you set a password as per "atc_basic_auth.password"

$ bosh deploy -d concourse concourse.yml \
    -l ../versions.yml \
    --vars-store cluster-creds.yml \
    -o operations/basic-auth.yml \
    -o operations/privileged-http.yml \
    -o operations/privileged-https.yml \
    -o operations/tls.yml \
    -o operations/tls-vars.yml \
    -o operations/web-network-extension.yml \
    --var network_name=default \
    --var external_url=$external_url \
    --var web_vm_type=default \
    --var db_vm_type=default \
    --var db_persistent_disk_type=10GB \
    --var worker_vm_type=default \
    --var deployment_name=concourse \
    --var web_network_name=private \
    --var web_network_vm_extension=lb \
    --var atc_basic_auth.username=admin \
    --var atc_basic_auth.password=..... \
    --var worker_ephemeral_disk=500GB_ephemeral_disk \
    -o operations/worker-ephemeral-disk.yml

8. Once deployed, verify the deployment and the VMs created as follows

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env deployments
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name       Release(s)          Stemcell(s)                                      Team(s)
concourse  concourse/3.13.0    bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3363.69  -
           garden-runc/1.13.1
           postgres/28

1 deployments

Succeeded
pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 32. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/db78de7f-55c5-42f5-bf9d-20b4ef0fd331      running        z1  10.0.16.5  i-04904fbdd1c7e829f  default  true
web/767b14c8-8fd3-46f0-b74f-0dca2c3b9572     running        z1  10.0.16.4  i-0e5f1275f635bd49d  default  true
worker/cde3ae19-5dbc-4c39-854d-842bbbfbe5cd  running        z1  10.0.16.6  i-0bd44407ec0bd1d8a  default  true

3 vms

Succeeded

9. Navigate to the LB URL we used above to access the Concourse UI, using the username/password you set as per the deployment

https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com/


10. Finally, we can see the Bosh Director and Concourse deployment VMs on our AWS EC2 page as follows



More Information

Categories: Fusion Middleware

bosh-bootloader or "Bubble" as pronounced and how to get started

Wed, 2018-08-15 06:50
I decided to try out installing bosh using the bosh-bootloader CLI today. bbl currently supports AWS, GCP, Microsoft Azure, OpenStack and vSphere. In this example I started with AWS, but it won't be long until I try this on GCP.

It's worth noting that this can all be done remotely from your laptop once you give BBL the access it needs for the cloud environment.

Steps

1. First you're going to need the bosh v2 CLI, which you can install here

  https://bosh.io/docs/cli-v2/

Verify:

pasapicella@pas-macbook:~$ bosh -version
version 5.0.1-2432e5e9-2018-07-18T21:41:03Z

Succeeded

2. Second you will need Terraform; having a Mac I use brew

$ brew install terraform

Verify:

pasapicella@pas-macbook:~$ terraform version
Terraform v0.11.7

3. Now we need to install BBL which is done as follows on a Mac. I also show how to install bosh CLI as well if you missed step 1

$ brew tap cloudfoundry/tap
$ brew install bosh-cli
$ brew install bbl

Further instructions on this link

https://github.com/cloudfoundry/bosh-bootloader

4. At this point you're ready to deploy BOSH; the instructions for AWS are here

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

Pretty straightforward, but here is what I did at this point

5. In order for bbl to interact with AWS, an IAM user must be created. This user will be issuing API requests to create the infrastructure such as EC2 instances, load balancers, subnets, etc.

The user must have the following policy which I just copy into my clipboard to use later:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:*",
                "elasticloadbalancing:*",
                "cloudformation:*",
                "iam:*",
                "kms:*",
                "route53:*",
                "ec2:*"
            ],
            "Resource": "*"
        }
    ]
}


$ aws iam create-user --user-name "bbl-user"

This next command requires you to copy the policy JSON above

$ aws iam put-user-policy --user-name "bbl-user" --policy-name "bbl-policy" --policy-document "$(pbpaste)"

$ aws iam create-access-key --user-name "bbl-user"

You will get a JSON response at this point as follows. Save the output created here as it's used in the next few steps.

{
    "AccessKey": {
        "UserName": "bbl-user",
        "Status": "Active",
        "CreateDate": "2018-08-07T03:30:39.993Z",
        "SecretAccessKey": ".....",
        "AccessKeyId": "........"
    }
}

In the next step BBL will use these commands to create infrastructure on AWS.

6. Now we can pave the infrastructure, create a jumpbox, and create a BOSH Director, as well as an LB which I need as I plan to deploy Concourse using BOSH.

$ bbl up --aws-access-key-id ..... --aws-secret-access-key ... --aws-region ap-southeast-2 --lb-type concourse --name bosh-director -d -s state --iaas aws

The process takes around 5-8 minutes.

The bbl state directory contains all of the files that were used to create your bosh director. This should be checked in to version control, so that you have all the information necessary to destroy or update this environment at a later date.
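
A teardown sketch of my own (not part of the original post): the same state directory drives the destroy, and bbl again needs the AWS credentials it was given for "bbl up" (the same flags, or their BBL_AWS_* environment variable forms).

$ bbl destroy -s state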

7. Finally we target the bosh director as follows. Keep in mind everything we need is stored in the "state" directory as per above

$ eval "$(bbl print-env -s state)"

8. This will set various ENV variables which the bosh CLI will then use to target the bosh director. Now we just need to prepare ourselves to actually log in. I use a script as follows

target-bosh.sh

bbl director-ca-cert -s state > bosh.crt
export BOSH_CA_CERT=bosh.crt

export BOSH_ENVIRONMENT=$(bbl director-address -s state)

echo ""
echo "Username: $(bbl director-username -s state)"
echo "Password: $(bbl director-password -s state)"
echo ""
echo "Log in using -> bosh log-in"
echo ""

bosh alias-env aws-env

echo "ENV set to -> aws-env"
echo ""

Output when run, with the password omitted ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ ./target-bosh.sh

Username: admin
Password: ......

Log in using -> bosh log-in

Using environment 'https://10.0.0.6:25555' as client 'admin'

Name      bosh-bosh-director-aws
UUID      3ade0d28-77e6-4b5b-9be7-323a813ac87c
Version   266.4.0 (00000000)
CPI       aws_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin

Succeeded
ENV set to -> aws-env

9. Finally let's log in as follows

$ bosh -e aws-env log-in

Output ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env log-in
Successfully authenticated with UAA

Succeeded

10. Last but not least, let's see what VMs bosh has under management. These VMs are for the Concourse I installed. If you would like to install Concourse use this link - https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/concourse.md

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 20. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/ec8aa978-1ec5-4402-9835-9a1cbce9c1e5      running        z1  10.0.16.5  i-0d33949ece572beeb  default  true
web/686546be-09d1-43ec-bbb7-d96bb5edc3df     running        z1  10.0.16.4  i-03af52f574399af28  default  true
worker/679be815-6250-477c-899c-b962076f26f5  running        z1  10.0.16.6  i-0efac99165e12f2e6  default  true

3 vms

Succeeded

More Information

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/howto-target-bosh-director.md


Categories: Fusion Middleware

Using CFDOT (CF Diego Operator Toolkit) on Pivotal Cloud Foundry

Tue, 2018-06-19 22:12
I decided to use CFDOT (CF Diego Operator Toolkit) on my PCF 2.1 vSphere ENV today. Setting it up isn't required, as it's installed out of the box on the Bosh-managed Diego Cells as shown below. It gives nice detailed information around Cell capacity and other useful metrics.

1. SSH into Ops Manager VM

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-165$ ssh ubuntu@opsmgr.haas-165.mydns.com
Unauthorized use is strictly prohibited. All access and activity
is subject to logging and monitoring.
ubuntu@opsmgr.haas-165.mydns.com's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-124-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

...

ubuntu@bosh-stemcell:~$

At this point you will need to log into the Bosh Director as described below


2. Issue a command as follows once logged in to get all VMs. We just need the name of one of the Diego Cell VMs

ubuntu@bosh-stemcell:~$ bosh -e vmware vms --column=Instance --column="Process State"
Using environment '1.1.1.1' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Task 12086
Task 12087
Task 12086 done

Task 12087 done

Deployment 'cf-edc48fe108f1e5581fba'

Instance                                                            Process State
backup-prepare/eff97a4b-15a2-425c-8333-1dbaaefbb5ff                 running
clock_global/d77c485f-7d7c-43ae-b9de-584411ffa0bd                   running
cloud_controller/874dd06c-b76e-427a-943e-dea66f0345b6               running
cloud_controller/bba1819e-b7f4-4a34-897a-c78f6189667c               running
cloud_controller_worker/803bfb3f-653b-4311-b831-9b76e602714e        running
cloud_controller_worker/f5956edb-9510-4d99-a0f7-8545831b45ec        running
consul_server/3bfdc6bd-2f1d-4607-8564-148fadd4bc3d                  running
consul_server/4927cc4b-4531-429b-b379-83e283b779ba                  running
consul_server/69c1c5ee-8288-49bd-9112-afe05fe536f4                  running
diego_brain/01d3914c-2ab1-4b75-ada7-2267f34faee6                    running
diego_brain/564cf558-c2dc-4045-a4d1-54f633633dd6                    running
diego_brain/a22c2621-4278-4a83-94ee-34287deb9310                    running
diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf                     running
diego_cell/9452a3b4-d40c-49f1-9dbf-8d74202f7dff                     running
diego_cell/dfc8e214-2e59-4050-9312-1113662ce79f                     running

...

3. SSH into a Bosh managed Diego Cell VM. Use the correct name for one of your Diego Cells and your deployment name for CF itself

ubuntu@bosh-stemcell:~$ bosh -e vmware -d cf-edc48fe108f1e5581fba ssh diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf
Using environment '1.1.1.1' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Using deployment 'cf-edc48fe108f1e5581fba'

....

4. Run a command as follows "sudo su -"

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~$ sudo su -

5. Verify CFDOT CLI is installed using "cfdot"

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot
A command-line tool to interact with a Cloud Foundry Diego deployment

Usage:
  cfdot [command]

Available Commands:
  actual-lrp-groups            List actual LRP groups
  actual-lrp-groups-for-guid   List actual LRP groups for a process guid
  cancel-task                  Cancel task
  cell                         Show the specified cell presence
  cell-state                   Show the specified cell state
  cell-states                  Show cell states for all cells
  cells                        List registered cell presences
  claim-lock                   Claim Locket lock
  claim-presence               Claim Locket presence
  create-desired-lrp           Create a desired LRP
  create-task                  Create a Task
  delete-desired-lrp           Delete a desired LRP
  delete-task                  Delete a Task
  desired-lrp                  Show the specified desired LRP
  desired-lrp-scheduling-infos List desired LRP scheduling infos
  desired-lrps                 List desired LRPs
  domains                      List domains
  help                         Get help on [command]
  locks                        List Locket locks
  lrp-events                   Subscribe to BBS LRP events
  presences                    List Locket presences
  release-lock                 Release Locket lock
  retire-actual-lrp            Retire actual LRP by index and process guid
  set-domain                   Set domain
  task                         Display task
  task-events                  Subscribe to BBS Task events
  tasks                        List tasks in BBS
  update-desired-lrp           Update a desired LRP

Flags:
  -h, --help   help for cfdot

Use "cfdot [command] --help" for more information about a command.

6. Let's see what total capacity each Diego Cell has

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cells | jq -r
{
  "cell_id": "7ca12f7d-737f-47fb-a8bc-91d73e4791cf",
  "rep_address": "http://10.193.229.62:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://7ca12f7d-737f-47fb-a8bc-91d73e4791cf.cell.service.cf.internal:1801"
}
{
  "cell_id": "9452a3b4-d40c-49f1-9dbf-8d74202f7dff",
  "rep_address": "http://10.193.229.61:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://9452a3b4-d40c-49f1-9dbf-8d74202f7dff.cell.service.cf.internal:1801"
}
{
  "cell_id": "dfc8e214-2e59-4050-9312-1113662ce79f",
  "rep_address": "http://10.193.229.63:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://dfc8e214-2e59-4050-9312-1113662ce79f.cell.service.cf.internal:1801"
}

7. Finally, let's see what resources are still available on each Diego Cell

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cell-states | jq '"Cell Id -> \(.cell_id): LRPs -> \(.LRPs | length), Available Resources [MemoryMB] -> \(.AvailableResources.MemoryMB), Available Resources [DiskMB] -> \(.AvailableResources.DiskMB), Available Resources [Containers] -> \(.AvailableResources.Containers)"' -r

Cell Id -> 7ca12f7d-737f-47fb-a8bc-91d73e4791cf: LRPs -> 17, Available Resources [MemoryMB] -> 6843, Available Resources [DiskMB] -> 86141, Available Resources [Containers] -> 232
Cell Id -> 9452a3b4-d40c-49f1-9dbf-8d74202f7dff: LRPs -> 14, Available Resources [MemoryMB] -> 5371, Available Resources [DiskMB] -> 89213, Available Resources [Containers] -> 235
Cell Id -> dfc8e214-2e59-4050-9312-1113662ce79f: LRPs -> 14, Available Resources [MemoryMB] -> 4015, Available Resources [DiskMB] -> 89213, Available Resources [Containers] -> 235
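To get a single aggregate figure across all cells, you can slurp the cell states into an array with jq. A sketch using the same field names as above:

# -s slurps the per-cell JSON objects into one array before summing MemoryMB
diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cell-states | jq -s 'map(.AvailableResources.MemoryMB) | add'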

More Information

https://github.com/cloudfoundry/cfdot


Categories: Fusion Middleware

Deploying a Spring Boot Application on a Pivotal Container Service (PKS) Cluster on GCP

Wed, 2018-05-09 00:31
I have been "cf pushing" for as long as I can remember, so with Pivotal Container Service (PKS) let's walk through the process of deploying a basic Spring Boot application to a PKS cluster running on GCP.

A few assumptions:

1. PKS is already installed as shown by my Operations Manager UI below



2. A PKS Cluster already exists as shown by the command below

pasapicella@pas-macbook:~$ pks list-clusters

Name        Plan Name  UUID                                  Status     Action
my-cluster  small      1230fafb-b5a5-4f9f-9327-55f0b8254906  succeeded  CREATE

Example:

We will be using this Spring Boot application at the following GitHub URL

  https://github.com/papicella/springboot-actuator-2-demo


1. In this example my Spring Boot application has what is required within my Maven pom.xml file to allow me to create a Docker image, as shown below
  
<!-- tag::plugin[] -->
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.3.6</version>
    <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
            <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>
<!-- end::plugin[] -->

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>unpack</id>
            <phase>package</phase>
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <groupId>${project.groupId}</groupId>
                        <artifactId>${project.artifactId}</artifactId>
                        <version>${project.version}</version>
                    </artifactItem>
                </artifactItems>
            </configuration>
        </execution>
    </executions>
</plugin>

2. Once the Docker image was built I pushed it to Docker Hub as shown below. A sketch of the build and push commands follows the screenshot.


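For reference, building and pushing the image with the dockerfile-maven-plugin typically looks something like the sketch below. The "pasapples" prefix is just a placeholder for whatever ${docker.image.prefix} resolves to in your pom.xml.

# dockerfile:build is provided by the com.spotify dockerfile-maven-plugin configured above
$ mvn clean package dockerfile:build
$ docker push pasapples/springboot-actuator-2-demo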

3. Now we need the details of our PKS cluster, as shown below, before we can continue

pasapicella@pas-macbook:~$ pks cluster my-cluster

Name:                     my-cluster
Plan Name:                small
UUID:                     1230fafb-b5a5-4f9f-9327-55f0b8254906
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   cluster1.pks.pas-apples.online
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  192.168.20.10

4. Now we want to wire up "kubectl" to the cluster using the following command

pasapicella@pas-macbook:~$ pks get-credentials my-cluster

Fetching credentials for cluster my-cluster.
Context set for cluster my-cluster.

You can now switch between clusters by using:
$kubectl config use-context

pasapicella@pas-macbook:~$ kubectl cluster-info
Kubernetes master is running at https://cluster1.pks.pas-apples.online:8443
Heapster is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-influxdb is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Now we are ready to deploy a Spring Boot workload to our cluster. To do that let's download the YAML file below

https://github.com/papicella/springboot-actuator-2-demo/blob/master/lb-withspringboot.yml

Once downloaded, create the deployment as follows

$ kubectl create -f lb-withspringboot.yml

pasapicella@pas-macbook:~$ kubectl create -f lb-withspringboot.yml
service "spring-boot-service" created
deployment "spring-boot-deployment" created

6. Now let’s verify our deployment using some kubectl commands as follows

$ kubectl get deployment spring-boot-deployment
$ kubectl get pods
$ kubectl get svc

pasapicella@pas-macbook:~$ kubectl get deployment spring-boot-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
spring-boot-deployment   1         1         1            1           1m

pasapicella@pas-macbook:~$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
spring-boot-deployment-ccd947455-6clwv   1/1       Running   0          2m

pasapicella@pas-macbook:~$ kubectl get svc
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
kubernetes            ClusterIP      10.100.200.1     <none>          443/TCP          23m
spring-boot-service   LoadBalancer   10.100.200.137   35.197.187.43   8080:31408/TCP   2m

7. Using the external IP address that GCP exposed for us, we can access our Spring Boot application on port 8080 as shown below. In this example:

http://35.197.187.43:8080/



RESTful Endpoint

pasapicella@pas-macbook:~$ http http://35.197.187.43:8080/employees/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Wed, 09 May 2018 05:26:19 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "employee": {
            "href": "http://35.197.187.43:8080/employees/1"
        },
        "self": {
            "href": "http://35.197.187.43:8080/employees/1"
        }
    },
    "name": "pas"
}
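If you don't have httpie installed, curl against the same endpoint works just as well:

pasapicella@pas-macbook:~$ curl -s http://35.197.187.43:8080/employees/1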

More Information

Using PKS
https://docs.pivotal.io/runtimes/pks/1-0/using.html

Categories: Fusion Middleware

Using/Verifying the Autoscale service from Apps Manager UI in 5 minutes

Fri, 2018-04-20 04:59
Recently at a customer site I was asked to show how the Autoscale service shipped by default with Pivotal Cloud Foundry would work. Here is how we demoed that in less than 5 minutes.

1. Select an application to Autoscale and click on the "Autoscaling" radio option.


2. Select "Manage Autoscaling" link as shown below.


3. Set the maximum instance limit to "4" and click Save as shown below. You can also set the minimum to 1 instance if you want, which makes it easier to verify scaling since a single instance can easily be put under pressure.


4. Now let's set a "Scaling Rule" by clicking on the "Edit" link as shown below.


5. Now let's add a CPU rule by clicking on the "Add" link as shown below.


6. Now define a CPU rule as shown below and click on Save. Don't forget to make it active using the radio option. In this example we use very low thresholds, but it would be better to increase these to something more realistic like 30% and 60% respectively.




Now we are ready to test the Autoscale service, but to do that we need to create some load. There are many ways to do that; "ab" on my Mac was the fastest way.

8. Create some load on an endpoint for your application to force CPU utilization to increase as shown below

pasapicella@pas-macbook:~$ ab -n 10000 -c 25 http://springboot-actuator-appsmanager-delightful-jaguar.cfapps.io/employees
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking springboot-actuator-appsmanager-delightful-jaguar.cfapps.io (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

....
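If you don't have "ab" available, a simple shell loop is enough to generate some load. A rough sketch:

# hammer the same endpoint until interrupted with Ctrl-C
pasapicella@pas-macbook:~$ while true; do curl -s http://springboot-actuator-appsmanager-delightful-jaguar.cfapps.io/employees > /dev/null; done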

9. If you return to the Apps Manager UI soon enough you will see that the Autoscale service has fired events to add more instances, as per the screenshots below.




It's worth noting that the CF CLI plugin for the Autoscaler can also show us what we have defined, as shown below. More information on this plugin is available here:

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler-cli.html#install

View which applications are using the Autoscaler service:

pasapicella@pas-macbook:~$ cf autoscaling-apps
Presenting autoscaler apps in org apples-pivotal-org / space development as papicella@pivotal.io
OK
Name                              Guid                                   Enabled   Min Instances   Max Instances
springboot-actuator-appsmanager   6c137fea-6a99-4069-8031-a2aa3978804c   true      2               4

View events for an application that has Autoscaler service bound to it:

pasapicella@pas-macbook:~$ cf autoscaling-events springboot-actuator-appsmanager
Presenting autoscaler events for app springboot-actuator-appsmanager for org apples-pivotal-org / space development as papicella@pivotal.io
OK
Time                   Description
2018-04-20T09:56:30Z   Scaled down from 3 to 2 instances. All metrics are currently below minimum thresholds.
2018-04-20T09:55:56Z   Scaled down from 4 to 3 instances. All metrics are currently below minimum thresholds.
2018-04-20T09:54:46Z   Can not scale up. At max limit of 4 instances. Current CPU of 20.75% is above upper threshold of 8.00%.
2018-04-20T09:54:11Z   Can not scale up. At max limit of 4 instances. Current CPU of 30.53% is above upper threshold of 8.00%.
2018-04-20T09:53:36Z   Can not scale up. At max limit of 4 instances. Current CPU of 32.14% is above upper threshold of 8.00%.
2018-04-20T09:53:02Z   Can not scale up. At max limit of 4 instances. Current CPU of 31.51% is above upper threshold of 8.00%.
2018-04-20T09:52:27Z   Scaled up from 3 to 4 instances. Current CPU of 19.59% is above upper threshold of 8.00%.
2018-04-20T09:51:51Z   Scaled up from 2 to 3 instances. Current CPU of 8.99% is above upper threshold of 8.00%.
2018-04-20T09:13:24Z   Scaling from 1 to 2 instances: app below minimum instance limit
2018-04-20T09:13:23Z   Enabled autoscaling.

More Information

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler-cli.html#install

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler.html

Categories: Fusion Middleware

Spring Cloud Services CF CLI Plugin

Tue, 2018-04-10 06:27
The Spring Cloud Services plugin for the Cloud Foundry Command Line Interface tool (cf CLI) adds commands for interacting with Spring Cloud Services service instances. It provides easy access to functionality relating to the Config Server and Service Registry; for example, it can be used to send values to a Config Server service instance for encryption or to list all applications registered with a Service Registry service instance.
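For example, once the plugin is installed (step 1 below), the Config Server encryption feature mentioned above can be driven from the CLI along these lines. This is a sketch; "my-config-server" is a placeholder service instance name.

# encrypts a value using the named Config Server service instance
$ cf config-server-encrypt-value my-config-server "my-secret-value"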

Here is a simple example of how we can view the various apps bound to a Service Registry.

1. Install the CF CLI Plugin for Spring Cloud Services using the link below

$ cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org

$ cf install-plugin -r CF-Community "Spring Cloud Services"

2. Now in the Apps Manager UI we have a Service Registry instance with some bound microservices, as shown below



3. Now we can use the SCS CF CLI Plugin to also get this information

pasapicella@pas-macbook:~$ cf service-registry-list eureka-service
Listing service registry eureka-service in org apples-pivotal-org / space scs-demo as papicella@pivotal.io...
OK

Service instance: eureka-service
Server URL: https://eureka-fcf42b1c-6b85-444c-9a43-fee82f2c68c3.cfapps.io/

eureka app name cf app name    cf instance index zone      status
EDGE-SERVICE    edge-service   0                 cfapps.io UP
COFFEE-SERVICE  coffee-service 0                 cfapps.io UP

The full list of plugin commands is shown in the screenshot below.

Note: Use "cf plugins" to get this list once installed


More Information

http://docs.pivotal.io/spring-cloud-services/1-5/common/cf-cli-plugin.html

Categories: Fusion Middleware

Deploying my first Pivotal Container Service (PKS) workload to my PKS cluster

Wed, 2018-04-04 01:15
If you followed along with the previous blogs you would have installed PKS 1.0 on GCP (Google Cloud Platform), created your first PKS cluster, wired it into kubectl and provisioned an external load balancer, as per the previous two posts.

Previous posts:

Install Pivotal Container Service (PKS) on GCP and getting started
http://theblasfrompas.blogspot.com.au/2018/04/install-pivotal-container-service-pks.html

Wiring kubectl / Setup external LB on GCP into Pivotal Container Service (PKS) clusters to get started
http://theblasfrompas.blogspot.com.au/2018/04/wiring-kubectl-setup-external-lb-on-gcp.html

So let's now create our first workload as shown below

1. Download the demo YAML from here

https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx-lb.yml

2. Deploy as shown below

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl create -f nginx-lb.yml
service "nginx" created
deployment "nginx" created

3. Check current status

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-679dc9c764-8cwzq   1/1       Running   0          22s
nginx-679dc9c764-p8tf2   1/1       Running   0          22s
nginx-679dc9c764-s79mp   1/1       Running   0          22s

4. Wait for the external IP address of the nginx service to be assigned

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.100.200.1     <none>          443/TCP        17h
nginx        LoadBalancer   10.100.200.143   35.189.23.119   80:30481/TCP   1m
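Tip: rather than re-running the command, you can watch the service until the external IP is assigned using kubectl's watch flag:

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl get svc nginx -w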

5. In a browser, access the K8s workload using the external IP as follows

http://35.189.23.119



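Alternatively, a quick check from the command line using the same external IP:

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ curl -I http://35.189.23.119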
More Information

https://docs.pivotal.io/runtimes/pks/1-0/index.html
Categories: Fusion Middleware

Pages