Fusion Middleware
Loading Australian Football League (AFL) Data into the Elastic Stack with some cool visualisations
I decided to load some AFL data into the Elastic Stack and do some basic visualisations. I loaded data for all home-and-away plus finals games since 2017, so four seasons in total. Follow along below if you want to do the same.
Steps
Note: We already have an Elasticsearch cluster running for this demo.
$ curl -u "elastic:welcome1" localhost:9200
{
"name" : "node1",
"cluster_name" : "apples-cluster",
"cluster_uuid" : "hJrp2eJaRGCfBt7Zg_-EJQ",
"version" : {
"number" : "7.10.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
"build_date" : "2020-11-09T21:30:33.964949Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
First I need the data loaded into the Elastic Stack. I did that using the Squiggle API, which you can use as follows.
1. I use HTTPie rather than curl.
http "https://api.squiggle.com.au/?q=games;complete=100" > games-2017-2020.json
2. Now this data needs to be altered slightly so I can bulk load it into the Elasticsearch cluster. I use jq to do that as follows.
cat games-2017-2020.json | jq -c '.games[] | {"index": {"_id": .id}}, .' > converted-games-2017-2020.json
A snippet of what the JSON file now looks like:
{"index":{"_id":1}}
{"round":1,"hgoals":14,"roundname":"Round 1","hteamid":3,"hscore":89,"winner":"Richmond","ateam":"Richmond","hbehinds":5,"venue":"M.C.G.","year":2017,"complete":100,"id":1,"localtime":"2017-03-23 19:20:00","agoals":20,"date":"2017-03-23 19:20:00","hteam":"Carlton","updated":"2017-04-15 15:59:16","tz":"+11:00","ascore":132,"ateamid":14,"winnerteamid":14,"is_grand_final":0,"abehinds":12,"is_final":0}
{"index":{"_id":2}}
{"date":"2017-03-24 19:50:00","agoals":15,"ateamid":18,"winnerteamid":18,"hteam":"Collingwood","updated":"2017-04-15 15:59:16","tz":"+11:00","ascore":100,"is_grand_final":0,"abehinds":10,"is_final":0,"round":1,"hgoals":12,"hscore":86,"winner":"Western Bulldogs","ateam":"Western Bulldogs","roundname":"Round 1","hteamid":4,"hbehinds":14,"venue":"M.C.G.","year":2017,"complete":100,"id":2,"localtime":"2017-03-24 19:50:00"}
{"index":{"_id":3}}
{"hscore":82,"ateam":"Port Adelaide","winner":"Port Adelaide","roundname":"Round 1","hteamid":16,"round":1,"hgoals":12,"complete":100,"id":3,"localtime":"2017-03-25 16:35:00","venue":"S.C.G.","hbehinds":10,"year":2017,"ateamid":13,"winnerteamid":13,"updated":"2017-04-15 15:59:16","hteam":"Sydney","tz":"+11:00","ascore":110,"date":"2017-03-25 16:35:00","agoals":17,"is_final":0,"is_grand_final":0,"abehinds":8}
3. Using Dev Tools in Kibana we can run a query as follows
Question: Get each team's winning games for the 2020 season before the finals - the final ladder
Query:
GET afl_games/_search
{
"size": 0,
"query": {
"bool": {
"must": [
{
"match": {
"year": 2020
}
},
{
"match": {
"is_final": 0
}
}
]
}
},
"aggs": {
"group_by_winner": {
"terms": {
"field": "winner.keyword",
"size": 20
}
}
}
}
Results:
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 153,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"group_by_winner" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Brisbane Lions",
"doc_count" : 14
},
{
"key" : "Port Adelaide",
"doc_count" : 14
},
{
"key" : "Geelong",
"doc_count" : 12
},
{
"key" : "Richmond",
"doc_count" : 12
},
{
"key" : "West Coast",
"doc_count" : 12
},
{
"key" : "St Kilda",
"doc_count" : 10
},
{
"key" : "Western Bulldogs",
"doc_count" : 10
},
{
"key" : "Collingwood",
"doc_count" : 9
},
{
"key" : "Melbourne",
"doc_count" : 9
},
{
"key" : "Greater Western Sydney",
"doc_count" : 8
},
{
"key" : "Carlton",
"doc_count" : 7
},
{
"key" : "Fremantle",
"doc_count" : 7
},
{
"key" : "Essendon",
"doc_count" : 6
},
{
"key" : "Gold Coast",
"doc_count" : 5
},
{
"key" : "Hawthorn",
"doc_count" : 5
},
{
"key" : "Sydney",
"doc_count" : 5
},
{
"key" : "Adelaide",
"doc_count" : 3
},
{
"key" : "North Melbourne",
"doc_count" : 3
}
]
}
}
}
4. Finally, use Kibana Lens to easily visualize this data on a Kibana Dashboard
Of course you could do much more, plus load more data from Squiggle, and with the power of Kibana feel free to create your own visualisations.
More Information
Squiggle API
Getting Started with the Elastic Stack
https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
VMware Solutions Hub - Elastic Cloud on Kubernetes - the official Elasticsearch Operator from the creators
Proud to have worked on this with the VMware Tanzu team and the Elastic team to add this to the VMware Solutions Hub page, clearly highlighting what the Elastic Stack on Kubernetes really means.
Do you need to run your Elastic Stack on a certified Kubernetes distribution, bolstered by the global Kubernetes community allowing you to focus on delivering innovative applications powered by Elastic?
If so click below to get started:
https://tanzu.vmware.com/solutions-hub/data-management/elastic
More Information
https://tanzu.vmware.com/solutions-hub/data-management/elastic
How to Become a Kubernetes Admin from the Comfort of Your vSphere
My talk at VMworld 2020 with Olive Power can be found here.
Talk Details
In this session, we will walk through the integration of VMware vSphere and Kubernetes, and how this union of technologies can fundamentally change how virtual infrastructure and operational engineers view the management of Kubernetes platforms. We will demonstrate the capability of vSphere to host Kubernetes clusters internally, allocate capacity to those clusters, and monitor them side by side with virtual machines (VMs). We will talk about how extended vSphere functionality eases the transition of enterprises to running yet another platform (Kubernetes) by treating all managed endpoints—be they VMs, Kubernetes clusters or pods—as one platform. We want to demonstrate that platforms for running modern applications can be facilitated through the intuitive interface of vSphere and its ecosystem of automation tooling
https://www.vmworld.com/en/video-library/search.html#text=%22KUB2038%22&year=2020
Service Accounts suck - why data futures require end to end authentication.
The end of the 19th century
The Island
What is guilt? Who is guilty? Is redemption possible? What is sanity? Do persons have a telos, a destiny, both or neither? Ostrov (The Island) asks and answers all these questions and more.
A film that improbably remains one of the best of this century: it "reads" like a 19th-century Russian novel, and the bleakly stunning visual setting alone is worth the time to watch.
java-cfenv : A library for accessing Cloud Foundry Services on the new Tanzu Application Service for Kubernetes
The Spring Cloud Connectors library has been with us since the launch of Cloud Foundry itself back in 2011. This library would create the required Spring Beans from the bound VCAP_SERVICES environment variable of a pushed Cloud Foundry application, for example to connect to databases. The Java buildpack then replaces the bean definitions you had in your application with those created by the connectors library through a feature called 'auto-reconfiguration'.
Auto-reconfiguration is great for getting started. However, it is not so great when you want more control, for example changing the size of the connection pool associated with a DataSource.
With the upcoming Tanzu Application Service for Kubernetes, the original Cloud Foundry buildpacks are replaced with the new Tanzu Buildpacks, which are based on the Cloud Native Buildpacks CNCF Sandbox project. As a result, auto-reconfiguration is no longer included in the Java cloud native buildpacks, which means auto-configuration for the backing services is no longer available.
So is there another option? The answer is "Java CFEnv". This provides a simple API for retrieving credentials from the JSON strings contained inside the VCAP_SERVICES environment variable.
https://github.com/pivotal-cf/java-cfenv
So if you're after exactly how it worked previously, all you need to do is add this Maven dependency to your project as shown below.
<dependency>
<groupId>io.pivotal.cfenv</groupId>
<artifactId>java-cfenv-boot</artifactId>
</dependency>
Of course this new library is much more flexible than that. With the class CfEnv as the entry point to the API for accessing Cloud Foundry environment variables, you are free to use the Spring Expression Language to invoke methods on the bean of type CfEnv to set properties, and more.
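As a rough sketch of that direct API usage (the service name "mydb" below is hypothetical; the accessors come from the Java CFEnv README):
import io.pivotal.cfenv.core.CfCredentials;
import io.pivotal.cfenv.core.CfEnv;

public class CfEnvExample {

    public static void main(String[] args) {
        // CfEnv parses the VCAP_SERVICES environment variable of the running application
        CfEnv cfEnv = new CfEnv();

        // Look up the credentials of a bound service by name ("mydb" is a placeholder)
        CfCredentials credentials = cfEnv.findCredentialsByName("mydb");

        // Pull out individual credential values, such as the connection URI
        String uri = credentials.getUri();
        System.out.println("Service URI: " + uri);
    }
}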
For more information read the full blog post linked below.
Finally this Spring Boot application is an example of using this new library with an application deployed to the new Tanzu Application Service for Kubernetes.
https://github.com/papicella/spring-book-service
More Information
1. Introducing java-cfenv: A new library for accessing Cloud Foundry Services
2. Java CFEnv GitHub Repo
https://github.com/pivotal-cf/java-cfenv#pushing-your-application-to-cloud-foundry
Getting RocksDB working on Raspberry PI (Unsatisfied linker error when trying to run Kafka Streams)
Configure a MySQL Marketplace service for the new Tanzu Application Service on Kubernetes using Container Services Manager for VMware Tanzu
$ kubectl get all -n ksm
NAME READY STATUS RESTARTS AGE
pod/ksm-chartmuseum-78d5d5bfb-2ggdg 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-blvpp 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-mnshg 1/1 Running 0 15d
pod/ksm-ksm-daemon-587b6fd549-cc7sv 1/1 Running 1 15d
pod/ksm-ksm-daemon-587b6fd549-fgqx5 1/1 Running 1 15d
pod/ksm-postgresql-0 1/1 Running 0 15d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ksm-chartmuseum ClusterIP 10.100.200.107 <none> 8080/TCP 15d
service/ksm-ksm-broker LoadBalancer 10.100.200.229 10.195.93.188 80:30086/TCP 15d
service/ksm-ksm-daemon LoadBalancer 10.100.200.222 10.195.93.179 80:31410/TCP 15d
service/ksm-postgresql ClusterIP 10.100.200.213 <none> 5432/TCP 15d
service/ksm-postgresql-headless ClusterIP None <none> 5432/TCP 15d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ksm-chartmuseum 1/1 1 1 15d
deployment.apps/ksm-ksm-broker 2/2 2 2 15d
deployment.apps/ksm-ksm-daemon 2/2 2 2 15d
NAME DESIRED CURRENT READY AGE
replicaset.apps/ksm-chartmuseum-78d5d5bfb 1 1 1 15d
replicaset.apps/ksm-ksm-broker-6db696894c 2 2 2 15d
replicaset.apps/ksm-ksm-broker-8645dfcf98 0 0 0 15d
replicaset.apps/ksm-ksm-daemon-587b6fd549 2 2 2 15d
NAME READY AGE
statefulset.apps/ksm-postgresql 1/1 15d
A ClusterRoleBinding is also required, created using the following YAML:
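A minimal sketch of such a binding, assuming KSM runs under a ksm-admin service account in the ksm namespace (both names are assumptions) and needs cluster-admin to deploy charts:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ksm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ksm-admin
  namespace: ksm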
$ ksm offer list
MARKETPLACE NAME INCLUDED CHARTS VERSION PLANS
rabbitmq rabbitmq 6.18.1 [persistent ephemeral]
mysql mysql 6.14.7 [default]
$ kubectl get all -n ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
NAME READY STATUS RESTARTS AGE
pod/k-wqo5mubw-mysql-master-0 1/1 Running 0 15d
pod/k-wqo5mubw-mysql-slave-0 1/1 Running 0 15d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k-wqo5mubw-mysql LoadBalancer 10.100.200.12 10.195.93.192 3306:30563/TCP 15d
service/k-wqo5mubw-mysql-slave LoadBalancer 10.100.200.130 10.195.93.191 3306:31982/TCP 15d
NAME READY AGE
statefulset.apps/k-wqo5mubw-mysql-master 1/1 15d
statefulset.apps/k-wqo5mubw-mysql-slave 1/1 15d
Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
$ kubectl get pods -n kafka
NAME READY STATUS RESTARTS AGE
strimzi-cluster-operator-6c9d899778-4mdtg 1/1 Running 0 6d22h
- We have enabled access to the cluster using the type LoadBalancer, which means your K8s cluster needs to support such a type (a sketch of such a Kafka resource follows this list)
- We need to create dynamic persistence claims in the cluster, so ensure #3 above is in place
- We have disabled TLS given this is a demo
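A minimal sketch of the Kafka resource along those lines, matching the cluster name and replica counts shown below (the API version and storage sizes are assumptions based on Strimzi releases of that era):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      external:
        type: loadbalancer   # one LoadBalancer per broker plus an external bootstrap service
        tls: false           # TLS disabled given this is a demo
    storage:
      type: persistent-claim # relies on dynamic PVC provisioning in the cluster
      size: 10Gi
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}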
$ kubectl get Kafka -n kafka
NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS
apples-kafka-cluster   3                        3
$ kubectl get all -n kafka
NAME READY STATUS RESTARTS AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc 3/3 Running 0 6d21h
pod/apples-kafka-cluster-kafka-0 2/2 Running 0 6d21h
pod/apples-kafka-cluster-kafka-1 2/2 Running 0 6d21h
pod/apples-kafka-cluster-kafka-2 2/2 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-0 1/1 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-1 1/1 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-2 1/1 Running 0 6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg 1/1 Running 0 6d23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/apples-kafka-cluster-kafka-0 LoadBalancer 10.100.200.90 10.195.93.200 9094:30362/TCP 6d21h
service/apples-kafka-cluster-kafka-1 LoadBalancer 10.100.200.179 10.195.93.197 9094:32022/TCP 6d21h
service/apples-kafka-cluster-kafka-2 LoadBalancer 10.100.200.155 10.195.93.201 9094:32277/TCP 6d21h
service/apples-kafka-cluster-kafka-bootstrap ClusterIP 10.100.200.77 <none> 9091/TCP,9092/TCP,9093/TCP 6d21h
service/apples-kafka-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP,9093/TCP 6d21h
service/apples-kafka-cluster-kafka-external-bootstrap LoadBalancer 10.100.200.58 10.195.93.196 9094:30735/TCP 6d21h
service/apples-kafka-cluster-zookeeper-client ClusterIP 10.100.200.22 <none> 2181/TCP 6d21h
service/apples-kafka-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 6d21h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/apples-kafka-cluster-entity-operator 1/1 1 1 6d21h
deployment.apps/strimzi-cluster-operator 1/1 1 1 6d23h
NAME DESIRED CURRENT READY AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd 1 1 1 6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778 1 1 1 6d23h
NAME READY AGE
statefulset.apps/apples-kafka-cluster-kafka 3/3 6d21h
statefulset.apps/apples-kafka-cluster-zookeeper   3/3     6d21h
$ kubectl get KafkaTopic -n kafka
NAME PARTITIONS REPLICATION FACTOR
apples-topic 1 1
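With the Topic Operator running (part of the entity operator), the topic above can also be managed declaratively. A minimal sketch matching the partitions and replication factor shown:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster  # binds the topic to our Kafka cluster
spec:
  partitions: 1
  replicas: 1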
The demo's producer is a Spring MVC controller that sends form-posted messages to the topic:
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
@Slf4j
public class TopicMessageController {
private KafkaTemplate<String, String> kafkaTemplate;
@Autowired
public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
this.kafkaTemplate = kafkaTemplate;
}
final private String topicName = "apples-topic";
@GetMapping("/")
public String indexPage (Model model){
model.addAttribute("topicMessageAddSuccess", "N");
return "home";
}
@PostMapping("/addentry")
public String addNewTopicMessage (@RequestParam(value="message") String message, Model model){
kafkaTemplate.send(topicName, message);
log.info("Sent single message: " + message);
model.addAttribute("message", message);
model.addAttribute("topicMessageAddSuccess", "Y");
return "home";
}
}
The consumer side listens on the same topic and renders the received messages:
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

import java.util.ArrayList;

@Controller
@Slf4j
public class TopicConsumerController {
private static ArrayList<String> topicMessages = new ArrayList<String>();
@GetMapping("/")
public String indexPage (Model model){
model.addAttribute("topicMessages", topicMessages);
model.addAttribute("topicMessagesCount", topicMessages.size());
return "home";
}
@KafkaListener(topics = "apples-topic")
public void listen(String message) {
log.info("Received Message: " + message);
topicMessages.add(message);
}
}
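Both controllers rely on Spring Boot's Kafka auto-configuration to supply the KafkaTemplate and the listener container. A minimal application.yml sketch pointing at the external bootstrap LoadBalancer shown earlier (the group-id and String serializer settings are assumptions):
spring:
  kafka:
    bootstrap-servers: 10.195.93.196:9094
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: apples-demo
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer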
Sacred Forests
Now these forests are occupied by a handful of eremites. Their lived experience in these patches of natural oasis lends toward a wisdom that we seem to have lost in our industrialized and bustling commercial existence: “In this world nothing exists alone,” he said. “It’s interconnected. A beautiful tree cannot exist by itself. It needs other creatures. We live in this world by giving and taking. We give CO2 for trees, and they give us oxygen. If we prefer only the creatures we like and destroy others, we lose everything. Bear in mind that the thing you like is connected with so many other things. You should respect that co-existence.” As Alemayehu explained, biodiversity gives rise to a forest’s emergent properties. “If you go into a forest and say, ‘I have ten species, that’s all,’ you’re wrong. You have ten species plus their interactions. The interactions you don’t see: it’s a mystery. This is more than just summing up components, it’s beyond that. These emergent properties of a forest, all the flowering fruits—it’s so complicated and sophisticated. These interactions you cannot explain, really. You don’t see it.”
In my mind I see these eremites like Zosima in the Brothers Karamazov: "Love to throw yourself on the earth and kiss it. Kiss the earth and love it with an unceasing, consuming love. Love all men, love everything. Seek that rapture and ecstasy. Water the earth with the tears of your joy and love those tears. Don’t be ashamed of that ecstasy, prize it, for it is a gift of God and a great one; it is not given to many but only to the elect." Of course I may be romanticizing these good people's experience in these forest patches - I've never been there and never met any of the eremites that do.
And yet, as the author notes: "The trees’ fate is bound to ours, and our fate to theirs. And trees are nothing if not tenacious." For these Ethiopians, at least, a tree is tied inextricably to their salvation. But isn't it true that for all of us the tree is a source of life and ought to be honored as such?
Stumbled upon this today: Lens | The Kubernetes IDE
I installed it today and was impressed. Below are some screenshots of the new Tanzu Application Service running on my Kubernetes cluster using the Lens IDE. Simply point it at the kubeconfig for the cluster you wish to examine.
On Mac OSX it's installed as follows
$ brew cask install lens
More Information
https://github.com/lensapp/lens
Spring Boot Data Elasticsearch using Elastic Cloud on Kubernetes (ECK) on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
In this post I show how to get Elastic Cloud on Kubernetes (ECK) up and running on VMware Tanzu Kubernetes Grid Integrated Edition and how to access it using a Spring Boot Application using Spring Data Elasticsearch.
With ECK, users now have a seamless way of deploying, managing, and operating the Elastic Stack on Kubernetes.
If you have a K8s cluster that's all you need to follow along.
Steps
1. Let's install ECK on our cluster. We do that as follows.
Note: The latest version is 1.1, but I'm installing a slightly older version here.
$ kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
2. Make sure the operator is up and running as shown below
$ kubectl get all -n elastic-system
NAME READY STATUS RESTARTS AGE
pod/elastic-operator-0 1/1 Running 0 26d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elastic-webhook-server ClusterIP 10.100.200.55 <none> 443/TCP 26d
NAME READY AGE
statefulset.apps/elastic-operator 1/1 26d
3. We can also see a CRD for Elasticsearch as shown below.
elasticsearches.elasticsearch.k8s.elastic.co
$ kubectl get crd
NAME CREATED AT
apmservers.apm.k8s.elastic.co 2020-06-17T00:37:32Z
clusterlogsinks.pksapi.io 2020-06-16T23:04:43Z
clustermetricsinks.pksapi.io 2020-06-16T23:04:44Z
elasticsearches.elasticsearch.k8s.elastic.co 2020-06-17T00:37:33Z
kibanas.kibana.k8s.elastic.co 2020-06-17T00:37:34Z
loadbalancers.vmware.com 2020-06-16T22:51:52Z
logsinks.pksapi.io 2020-06-16T23:04:43Z
metricsinks.pksapi.io 2020-06-16T23:04:44Z
nsxerrors.nsx.vmware.com 2020-06-16T22:51:52Z
nsxlbmonitors.vmware.com 2020-06-16T22:51:52Z
nsxlocks.nsx.vmware.com 2020-06-16T22:51:51Z
4. We are now ready to create our first Elasticsearch cluster. To do that, create a YML file as shown below.
create-elastic-cluster-from-operator.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.7.0
http:
service:
spec:
type: LoadBalancer # default is ClusterIP
tls:
selfSignedCertificate:
disabled: true
nodeSets:
- name: default
count: 2
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
config:
node.master: true
node.data: true
node.ingest: true
node.store.allow_mmap: false
From the YML a few things to note:
- We are creating two pods for our Elasticsearch cluster
- We are using a K8s LoadBalancer to expose access to the cluster through HTTP
- We are using version 7.7.0 but this is not the latest Elasticsearch version
- We have disabled the use of TLS given this is just a demo
5. Apply the YML as follows.
$ kubectl apply -f create-elastic-cluster-from-operator.yaml
6. After about a minute we should have our Elasticsearch cluster running. The following commands show that.
$ kubectl get elasticsearch
NAME HEALTH NODES VERSION PHASE AGE
quickstart green 2 7.7.0 Ready 47h
$ kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/quickstart-es-default-0 1/1 Running 0 47h
pod/quickstart-es-default-1 1/1 Running 0 47h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 47h
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 47h
NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 47h
7. Let's deploy a Kibana instance. To do that create a YML as shown below
create-kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana-sample
spec:
version: 7.7.0
count: 1
elasticsearchRef:
name: quickstart
namespace: default
http:
service:
spec:
type: LoadBalancer # default is ClusterIP
8. Apply that as shown below.
$ kubectl apply -f create-kibana.yaml
9. To verify everything is up and running we can run a command as follows
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/kibana-sample-kb-f8fcb88d5-jdzh5 1/1 Running 0 2d
pod/quickstart-es-default-0 1/1 Running 0 2d
pod/quickstart-es-default-1 1/1 Running 0 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kibana-sample-kb-http LoadBalancer 10.100.200.46 10.195.93.174 5601:32459/TCP 2d
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 2d
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 2d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kibana-sample-kb 1/1 1 1 2d
NAME DESIRED CURRENT READY AGE
replicaset.apps/kibana-sample-kb-f8fcb88d5 1 1 1 2d
NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 2d
10. So to access our cluster we will need to obtain the following, which we can do using a script as shown below. This was tested on Mac OSX.
What do we need?
- Elasticsearch password
- IP address of the LoadBalancer service we created
access.sh
export PASSWORD=`kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'`
export IP=`kubectl get svc quickstart-es-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`
echo ""
echo $IP
echo ""
curl -u "elastic:$PASSWORD" "http://$IP:9200"
echo ""
curl -u "elastic:$PASSWORD" "http://$IP:9200/_cat/health?v"
Output:
10.195.93.137
{
"name" : "quickstart-es-default-1",
"cluster_name" : "quickstart",
"cluster_uuid" : "Bbpb7Pu7SmaQaCmEY2Er8g",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
.....
11. Ideally I would load some data into the Elasticsearch cluster, but let's do that as part of a sample application using "Spring Data Elasticsearch". Clone the demo project as shown below.
$ git clone https://github.com/papicella/boot-elastic-demo.git
Cloning into 'boot-elastic-demo'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 36 (delta 1), reused 36 (delta 1), pack-reused 0
Unpacking objects: 100% (36/36), done.
12. Edit "./src/main/resources/application.yml" with your details for the Elasticsearch cluster above.
spring:
elasticsearch:
rest:
username: elastic
password: {PASSWORD}
uris: http://{IP}:9200
13. Package as follows
$ ./mvnw -DskipTests package
14. Run as follows
$ ./mvnw spring-boot:run
....
2020-07-14 11:10:11.947 INFO 76260 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-07-14 11:10:11.954 INFO 76260 --- [ main] c.e.e.demo.BootElasticDemoApplication : Started BootElasticDemoApplication in 2.495 seconds (JVM running for 2.778)
15. Access application using "http://localhost:8080/"
16. If we look at our code we will see the data was loaded into the Elasticsearch cluster using a Java class called "LoadData.java". Ideally the data would already exist in the cluster, but for demo purposes we load some data as part of the Spring Boot application and clear it prior to each application run, given it's just a demo.
2020-07-14 11:12:33.109 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='OjThSnMBLjyTRl7lZsDL', make='holden', model='commodore', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:33.584 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='OzThSnMBLjyTRl7laMCo', make='holden', model='astra', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.189 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='PDThSnMBLjyTRl7lasCC', make='nissan', model='skyline', bodystyles=[BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.744 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='PTThSnMBLjyTRl7lbMDe', make='nissan', model='pathfinder', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:35.227 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='PjThSnMBLjyTRl7lb8AL', make='ford', model='falcon', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:36.737 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='QDThSnMBLjyTRl7lcMDu', make='ford', model='territory', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.266 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='QTThSnMBLjyTRl7ldsDU', make='toyota', model='camry', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.777 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='QjThSnMBLjyTRl7leMDk', make='toyota', model='corolla', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.285 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='QzThSnMBLjyTRl7lesDj', make='kia', model='sorento', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.800 INFO 76277 --- [ main] com.example.elastic.demo.LoadData : Pre loading Car{id='RDThSnMBLjyTRl7lfMDg', make='kia', model='sportage', bodystyles=[BodyStyle{type='4-door'}]}
LoadData.java
package com.example.elastic.demo;
import com.example.elastic.demo.indices.BodyStyle;
import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;
import static java.util.Arrays.asList;
@Configuration
@Slf4j
public class LoadData {
@Bean
public CommandLineRunner initElasticsearchData(CarRepository carRepository) {
return args -> {
carRepository.deleteAll();
log.info("Pre loading " + carRepository.save(new Car("holden", "commodore", asList(new BodyStyle("2-door"), new BodyStyle("4-door"), new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("holden", "astra", asList(new BodyStyle("2-door"), new BodyStyle("4-door")))));
log.info("Pre loading " + carRepository.save(new Car("nissan", "skyline", asList(new BodyStyle("4-door")))));
log.info("Pre loading " + carRepository.save(new Car("nissan", "pathfinder", asList(new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("ford", "falcon", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("ford", "territory", asList(new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("toyota", "camry", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("toyota", "corolla", asList(new BodyStyle("2-door"), new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("kia", "sorento", asList(new BodyStyle("5-door")))));
log.info("Pre loading " + carRepository.save(new Car("kia", "sportage", asList(new BodyStyle("4-door")))));
};
}
}
17. Our CarRepository interface is defined as follows
CarRepository.java
package com.example.elastic.demo.repo;
import com.example.elastic.demo.indices.Car;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;
@Repository
public interface CarRepository extends ElasticsearchRepository <Car, String> {
Page<Car> findByMakeContaining(String make, Pageable page);
}
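The Car document class itself lives in the cloned repo; a minimal sketch consistent with the index documents shown below (getters, setters and toString() elided):
package com.example.elastic.demo.indices;

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

@Document(indexName = "vehicle")
public class Car {

    @Id
    private String id;                  // assigned by Elasticsearch on save
    private String make;
    private String model;
    private List<BodyStyle> bodystyles;

    public Car(String make, String model, List<BodyStyle> bodystyles) {
        this.make = make;
        this.model = model;
        this.bodystyles = bodystyles;
    }

    // getters, setters and toString() omitted for brevity
}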
18. So let's also view this data using "curl" and Kibana as shown below.
curl -X GET -u "elastic:{PASSWORD}" "http://{IP}:9200/vehicle/_search?pretty" -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"sort": [
{ "_id": "asc" }
]
}
'
Output:
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 10,
"relation" : "eq"
},
"max_score" : null,
"hits" : [
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "OjThSnMBLjyTRl7lZsDL",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "holden",
"model" : "commodore",
"bodystyles" : [
{
"type" : "2-door"
},
{
"type" : "4-door"
},
{
"type" : "5-door"
}
]
},
"sort" : [
"OjThSnMBLjyTRl7lZsDL"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "OzThSnMBLjyTRl7laMCo",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "holden",
"model" : "astra",
"bodystyles" : [
{
"type" : "2-door"
},
{
"type" : "4-door"
}
]
},
"sort" : [
"OzThSnMBLjyTRl7laMCo"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "PDThSnMBLjyTRl7lasCC",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "nissan",
"model" : "skyline",
"bodystyles" : [
{
"type" : "4-door"
}
]
},
"sort" : [
"PDThSnMBLjyTRl7lasCC"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "PTThSnMBLjyTRl7lbMDe",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "nissan",
"model" : "pathfinder",
"bodystyles" : [
{
"type" : "5-door"
}
]
},
"sort" : [
"PTThSnMBLjyTRl7lbMDe"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "PjThSnMBLjyTRl7lb8AL",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "ford",
"model" : "falcon",
"bodystyles" : [
{
"type" : "4-door"
},
{
"type" : "5-door"
}
]
},
"sort" : [
"PjThSnMBLjyTRl7lb8AL"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "QDThSnMBLjyTRl7lcMDu",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "ford",
"model" : "territory",
"bodystyles" : [
{
"type" : "5-door"
}
]
},
"sort" : [
"QDThSnMBLjyTRl7lcMDu"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "QTThSnMBLjyTRl7ldsDU",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "toyota",
"model" : "camry",
"bodystyles" : [
{
"type" : "4-door"
},
{
"type" : "5-door"
}
]
},
"sort" : [
"QTThSnMBLjyTRl7ldsDU"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "QjThSnMBLjyTRl7leMDk",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "toyota",
"model" : "corolla",
"bodystyles" : [
{
"type" : "2-door"
},
{
"type" : "5-door"
}
]
},
"sort" : [
"QjThSnMBLjyTRl7leMDk"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "QzThSnMBLjyTRl7lesDj",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "kia",
"model" : "sorento",
"bodystyles" : [
{
"type" : "5-door"
}
]
},
"sort" : [
"QzThSnMBLjyTRl7lesDj"
]
},
{
"_index" : "vehicle",
"_type" : "_doc",
"_id" : "RDThSnMBLjyTRl7lfMDg",
"_score" : null,
"_source" : {
"_class" : "com.example.elastic.demo.indices.Car",
"make" : "kia",
"model" : "sportage",
"bodystyles" : [
{
"type" : "4-door"
}
]
},
"sort" : [
"RDThSnMBLjyTRl7lfMDg"
]
}
]
}
}
Kibana
Obtain the Kibana HTTP IP as shown below and login using the username "elastic" and the password we obtained previously.
$ kubectl get svc kibana-sample-kb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.195.93.174
Finally, maybe you want to deploy the application to Kubernetes. To do that, take a look at the Cloud Native Buildpacks CNCF project and/or Tanzu Build Service to turn your code into a container image stored in a registry.
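As a rough sketch of what that looks like with the pack CLI (the image name is a placeholder, and the builder image is an assumption that has changed over time):
$ pack build myrepo/boot-elastic-demo \
    --path ./target/demo-0.0.1-SNAPSHOT.jar \
    --builder gcr.io/paketo-buildpacks/builder:base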
More Information
Spring Data Elasticsearch
https://spring.io/projects/spring-data-elasticsearch
VMware Tanzu Kubernetes Grid Integrated Edition Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid-Integrated-Edition/index.html
Multi-Factor Authentication (MFA) using OKTA with Spring Boot and Tanzu Application Service
Steps
1. Clone the existing repo as shown below
$ git clone https://github.com/papicella/mfa-boot-fsi
Cloning into 'mfa-boot-fsi'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 2), reused 47 (delta 2), pack-reused 0
Unpacking objects: 100% (47/47), done.
2. Create a free account at https://developer.okta.com/
Once created, login to the dev account. Your account URL will look something like the following:
https://dev-{ID}-admin.okta.com
3. You will need your default authorization server settings. From the top menu in the developer.okta.com dashboard, go to API -> Authorization Servers and click on the default server
You will need this data shortly. The image above is an example; those details won't work for your own setup.
4. From the top menu, go to Applications and click the Add Application button. Click on the Web button and click Next. Name your app whatever you like. I named mine "pas-okta-springapp". Otherwise the default settings are fine. Click Done.
From this screen shot you can see that the default's refer to localhost which for DEV purposes is fine.
You will need the Client ID and Client secret from the final screen, so make a note of these.
5. Edit the "./mfa-boot-fsi/src/main/resources/application-DEV.yml" to include the details as per #3 and #4 above.
You will need to edit
- issuer
- client-id
- client-secret
application-DEV.yaml
spring:
security:
oauth2:
client:
provider:
okta:
user-name-attribute: email
okta:
oauth2:
issuer: https://dev-213269.okta.com/oauth2/default
redirect-uri: /authorization-code/callback
scopes:
- profile
- openid
client-id: ....
client-secret: ....
6. In order to pick up this application-DEV.yaml we have to set the spring profile correctly. That can be done using a JVM property as follows.
-Dspring.profiles.active=DEV
In my example I use IntelliJ IDEA so I set it on the run configurations dialog as follows
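If you prefer the command line to an IDE run configuration, the Spring Boot Maven plugin can pass the same JVM property; a sketch using the plugin's spring-boot.run.jvmArguments property:
$ ./mvnw spring-boot:run -Dspring-boot.run.jvmArguments="-Dspring.profiles.active=DEV"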
7. Finally let's set up MFA, which we do by switching to the classic UI as shown below
8. Click on Security -> Multifactor and set up another multifactor policy. In the screenshot below I select "Email Policy" and make sure it is "Required", along with the default policy
9. Now run the application making sure you set the spring active profile to DEV.
...
2020-07-10 13:34:57.528 INFO 55990 --- [ restartedMain] pas.apa.apj.mfa.demo.DemoApplication : The following profiles are active: DEV
...
10. Navigate to http://localhost:8080/
11. Click on the "Login" button
Verify you are taken to the default OKTA login page
12. Once logged in the second factor should then ask for a verification code to be sent to your email. Press the "Send me the code" button
13. Once you enter the code sent to your email you will be granted access to the application endpoints
14. Finally, to deploy the application to Tanzu Application Service, perform the steps below
- Create a manifest.yaml as follows
---
applications:
- name: pas-okta-boot-app
memory: 1024M
buildpack: https://github.com/cloudfoundry/java-buildpack.git#v4.16
instances: 2
path: ./target/demo-0.0.1-SNAPSHOT.jar
env:
JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 11.+}}'
- Package the application as follows
$ ./mvnw -DskipTests package
- In the DEV OKTA console create a second application, for the application deployed on Tanzu Application Service, which refers to its FQDN rather than localhost as shown below
- Edit "application.yml" to ensure you set the following correctly for the new "Application" we created above.
You will need to edit
- issuer
- client-id
- client-secret
That's It!!!!
Modern Times
I think many times the term "modernism" is conflated with "contemporary" in casual use. But by "modernism" in this case I mean, first and foremost, a mode of artistic exploration that breaks with prior, established forms, be they “rules” or aesthetic norms, seeing them as having exhausted their capacity to express themselves. Of course, these also involve the introduction of new forms and rationalizations for those shifts - ways to capture meaning in a way that carries forward a fresh energy of its own (at least for a time), often with an inchoate nod to "progress". I suppose the most recent manifestation of modernism may be transhumanism, but this obsession with the form seemed to have pervaded so much of the 20th century - in painting the emergence of cubism to the obsessiveness with abstraction (which finally gave way to a resurgence of figurative painting), in literary theory the move from structuralism to post structuralism and the disintegration into deconstruction. Poetry as well: proto modernists like Emily Dickinson paved the way for not only "high modernists" like Eliot but a full range of form-experimental poets, from ee cummings to BH Fairchild. These were not always entirely positive developments - I’ll take Miles Davis’s Kind of Blue over Bitches Brew any day of the week. But then again, I’ll take Dostoevsky over Tolstoy 10 times out of 10. In some sense, we have to take these developments as they come and eventually sift the wheat from the chaff.
Which brings me back to Pessoa, one of the literary giants of the Portuguese language. His Book of Disquiet was a lifelong project, which features a series - a seemingly never ending series - of reflections by a number of "heteronym" personalities he developed. The paragraphs are often redundant and the themes seem to run on, making for a difficult book to read in long sittings. As a consequence I've been pecking away at it slowly. It becomes more difficult as time goes by for another reason: the postured aloofness to life seems sometimes fake, sometimes pretentious: more what one would expect from an 18 year old than a mature writer who has mastered his craft. And yet Pessoa himself seems at times to long for a return to immaturity: "My only regret is that I am not a child, for that would allow me to believe in my dreams and believe that I am not mad, which would allow me to distance my soul from all those who surround me."
But still, the writing at times is simply gorgeous. There's not so much beauty in what Pessoa says as in how he says it. He retains completely the form of language, but deliberately evacuates the novel of its structure. What we are left with are in some sense "micro-essays" that sometimes connect and at other times disassociate. Taken as words that invoke meaning, they are often depressing, sometimes nonsensical. Taken as words that invoke feeling - a feeling of language arranged to be something more than just words - they can be spectacular.
The tension between the words as meaning and words as expression is impossible to escape: "Nothing satisfies me, nothing consoles me, everything—whether or not it has ever existed—satiates me. I neither want my soul nor wish to renounce it. I desire what I do not desire and renounce what I do not have. I can be neither nothing nor everything: I’m just the bridge between what I do not have and what I do not want.” What does one make of this when considered as creed? Unlikely anything positive. Yet this pericope is rendered in a particularly dreamy sort of way that infects the reader when immersed in the dream-like narrative in which it is situated. It's almost inescapable.
Few novels have made me pause for such extended periods of time to ponder not so much what the author has to say but how he says it. It's like a kind of poetry rendered without a poem.
---
A nod to New Directions Publishing, by the way, for making this project happen. Their edition of Disquiet I suspect will be seen as definitive for some time.
GitHub Actions to deploy Spring Boot application to Tanzu Application Service for Kubernetes
Steps
Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
$ kapp list
Target cluster 'https://35.189.13.31' (nodes: gke-tanzu-gke-lab-f67-np-f67b23a0f590-abbca04e-5sqc, 8+)
Apps in namespace 'default'
Name Namespaces Lcs Lca
certmanager-cluster-issuer (cluster) true 8d
externaldns (cluster),external-dns true 8d
harbor-cert harbor true 8d
tas (cluster),cf-blobstore,cf-db,cf-system, false 8d
cf-workloads,cf-workloads-staging,istio-system,kpack,
metacontroller
tas4k8s-cert cf-system true 8d
Lcs: Last Change Successful
Lca: Last Change Age
5 apps
Succeeded
The demo exists on GitHub at the URL below. To follow along, simply use your own GitHub repository and make the changes detailed below. The example here is a Spring Boot application, so the YAML file for the action would differ for non-Java applications, but there are many starter templates to choose from for other programming languages.
https://github.com/papicella/github-boot-demo
GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.
1. Create a folder at the root of your project source code as follows
$ mkdir ".github/workflows"
2. In ".github/workflows" folder, add a .yml or .yaml file for your workflow. For example, ".github/workflows/maven.yml"
3. Use the "Workflow syntax for GitHub Actions" reference documentation to choose events to trigger an action, add actions, and customize your workflow. In this example the YML "maven.yml" looks as follows.
maven.yml
name: Java CI with Maven and CD with CF CLI
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up JDK 11.0.5
uses: actions/setup-java@v1
with:
java-version: 11.0.5
- name: Build with Maven
run: mvn -B package --file pom.xml
- name: push to TAS4K8s
env:
CF_USERNAME: ${{ secrets.CF_USERNAME }}
CF_PASSWORD: ${{ secrets.CF_PASSWORD }}
run: |
curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
./cf api https://api.tas.lab.pasapples.me --skip-ssl-validation
./cf auth $CF_USERNAME $CF_PASSWORD
./cf target -o apples-org -s development
./cf push -f manifest.yaml
A few things to note about the YML workflow syntax for the GitHub Action above:
- We are using a Maven action sample which will fire on a push or pull request on the master branch
- We are using JDK 11 rather than Java 8
- Three steps exist here:
  - Setup JDK
  - Maven Build/Package
  - CF CLI push to TAS4K8s using the JAR artifact from the Maven build
- We download the CF CLI into the Ubuntu image
- We have masked the username and password using Secrets
4. Next in the project root add a manifest YAML for deployment to TAS4K8s
- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application
---
applications:
- name: github-TAS4K8s-boot-demo
memory: 1024M
instances: 1
path: ./target/demo-0.0.1-SNAPSHOT.jar
5. Now we need to add Secrets to the GitHub repo, which are referenced in our "maven.yml" file. In our case they are as follows.
- CF_USERNAME
- CF_PASSWORD
6. At this point that is all we need to test our GitHub Action. Here in IntelliJ IDEA I issue a commit/push to trigger the GitHub action
7. If all went well, the "Actions" tab in your GitHub repo will show you the status and logs as follows
8. Finally our application will be deployed to TAS4K8s as shown below and we can invoke it using HTTPie or CURL for example
$ cf apps
Getting apps in org apples-org / space development as pas...
OK
name requested state instances memory disk urls
github-TAS4K8s-boot-demo started 1/1 1G 1G github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
my-springboot-app started 1/1 1G 1G my-springboot-app.apps.tas.lab.pasapples.me
test-node-app started 1/1 1G 1G test-node-app.apps.tas.lab.pasapples.me
$ cf app github-TAS4K8s-boot-demo
Showing health and status for app github-TAS4K8s-boot-demo in org apples-org / space development as pas...
name: github-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
last uploaded: Thu 18 Jun 12:03:19 AEST 2020
stack:
buildpacks:
type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-06-18T02:03:32Z 0.2% 136.5M of 1G 0 of 1G
$ http http://github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Thu, 18 Jun 2020 02:07:39 GMT
server: istio-envoy
x-envoy-upstream-service-time: 141
Thu Jun 18 02:07:39 GMT 2020
More Information
Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/
GitHub Actions
https://github.com/features/actions
GitHub Marketplace - Actions
https://github.com/marketplace?type=actions
Deploying a Spring Boot application to Tanzu Application Service for Kubernetes using GitLab
Steps
Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below
$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)
Apps in namespace 'default'
Name Namespaces Lcs Lca
cf (cluster),build-service,cf-blobstore,cf-db, true 10d
cf-system,cf-workloads,cf-workloads-staging,
istio-system,kpack,metacontroller
Lcs: Last Change Successful
Lca: Last Change Age
1 apps
Succeeded
Ensure you have GitLab running. In this example it's installed on a Kubernetes cluster, but it doesn't have to be. All that matters here is that GitLab can access the API endpoint of your TAS4K8s install.
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
gitlab gitlab 2 2020-05-15 13:22:15.470219 +1000 AEST deployed gitlab-3.3.4 12.10.5
1. First let's create a basic Spring Boot application with a simple RESTful endpoint as shown below. It's best to use the Spring Initializr to create this application. I simply used the web and lombok dependencies as shown below.
Note: Make sure you select Java version 11.
Spring Initializr Web Interface
Using the built-in Spring Initializr in IntelliJ IDEA.
Here is my simple RESTful controller, which simply outputs today's date.
package com.example.demo;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.Date;
@RestController
@Slf4j
public class FrontEnd {
@GetMapping("/")
public String index () {
log.info("An INFO Message");
return new Date().toString();
}
}
2. Create an empty project in GitLab using the name "gitlab-TAS4K8s-boot-demo"
3. At this point let's add our project files from step #1 above into the empty GitLab project repository. We do that as follows.
$ cd "existing project folder from step #1"
$ git init
$ git remote add origin http://gitlab.ci.run.haas-236.pez.pivotal.io/root/gitlab-tas4k8s-boot-demo.git
$ git add .
$ git commit -m "Initial commit"
$ git push -u origin master
Once done we now have our GitLab project repository with the files we created as part of the project setup
4. It's always worth running the code locally just to make sure it's working, so if you like you can do that as follows
RUN:
$ ./mvnw spring-boot:run
CURL:
$ curl http://localhost:8080/
Tue Jun 16 10:46:26 AEST 2020
HTTPie:
papicella@papicella:~$
papicella@papicella:~$
papicella@papicella:~$ http :8080/
HTTP/1.1 200
Connection: keep-alive
Content-Length: 29
Content-Type: text/plain;charset=UTF-8
Date: Tue, 16 Jun 2020 00:46:40 GMT
Keep-Alive: timeout=60
Tue Jun 16 10:46:40 AEST 2020
5. Our GitLab project has no pipelines defined, so let's create one as follows in the project root directory using the default pipeline name ".gitlab-ci.yml"
image: openjdk:11-jdk
stages:
- build
- deploy
build:
stage: build
script: ./mvnw package
artifacts:
paths:
- target/demo-0.0.1-SNAPSHOT.jar
production:
stage: deploy
script:
- curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
- ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation
- ./cf auth $CF_USERNAME $CF_PASSWORD
- ./cf target -o apples-org -s development
- ./cf push -f manifest.yaml
only:
- master
Note: We have not defined any tests in our pipeline, which we should do, but we haven't written any in this example.
6. For this pipeline to work we will need to do the following
- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application
---
applications:
- name: gitlab-TAS4K8s-boot-demo
memory: 1024M
instances: 1
path: ./target/demo-0.0.1-SNAPSHOT.jar
- Alter the API endpoint to match your TAS4K8s endpoint
- ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation
- Alter the target to use your ORG and SPACE within TAS4K8s.
- ./cf target -o apples-org -s development
This command shows you what your current CF CLI is targeted at, so you can ensure you edit the pipeline with the correct details.
$ cf target
api endpoint: https://api.system.run.haas-236.pez.pivotal.io
api version: 2.150.0
user: pas
org: apples-org
space: development
7. For the ".gitlab-ci.yml" to work we need to define two ENV variables for our username and password. Those two are as follows which is our login credentials to TAS4K8s
- CF_USERNAME
- CF_PASSWORD
To do that we need to navigate to "Project Settings -> CI/CD -> Variables" and fill in the appropriate details as shown below
8. Now let's add the two new files using git, add a commit message, and push the changes
$ git add .gitlab-ci.yml
$ git add manifest.yaml
$ git commit -m "add pipeline configuration"
$ git push -u origin master
9. Navigate to GitLab UI "CI/CD -> Pipelines" and we should see our pipeline starting to run
10. If everything went well!!!
11. Finally our application will be deployed to TAS4K8s as shown below
$ cf apps
Getting apps in org apples-org / space development as pas...
OK
name requested state instances memory disk urls
gitlab-TAS4K8s-boot-demo started 1/1 1G 1G gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
gitlab-tas4k8s-demo started 1/1 1G 1G gitlab-tas4k8s-demo.apps.system.run.haas-236.pez.pivotal.io
test-node-app started 1/1 1G 1G test-node-app.apps.system.run.haas-236.pez.pivotal.io
$ cf app gitlab-TAS4K8s-boot-demo
Showing health and status for app gitlab-TAS4K8s-boot-demo in org apples-org / space development as pas...
name: gitlab-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
last uploaded: Tue 16 Jun 11:29:03 AEST 2020
stack:
buildpacks:
type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-06-16T01:29:16Z 0.1% 118.2M of 1G 0 of 1G
12. Access it as follows.
$ http http://gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Tue, 16 Jun 2020 01:35:28 GMT
server: istio-envoy
x-envoy-upstream-service-time: 198
Tue Jun 16 01:35:28 GMT 2020
Of course, if you wanted to create an API-like service, you could use the source code at this repo, which uses OpenAPI, rather than the simple demo shown here.
https://github.com/papicella/spring-book-service
More Information
Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/
GitLab
https://about.gitlab.com/
Installing a UI for Tanzu Application Service for Kubernetes
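The UI here is the Stratos console, installed with Helm (the release name my-console and the console namespace match the output below). A minimal sketch of the install, assuming the public Stratos chart repository and a LoadBalancer for the UI service:
$ kubectl create namespace console
$ helm repo add stratos https://cloudfoundry.github.io/stratos
$ helm install my-console stratos/console --namespace console --set console.service.type=LoadBalancer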
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-console console 1 2020-06-05 13:18:22.785689 +1000 AEST deployed console-3.2.1 3.2.1
$ kubectl get all -n console
NAME READY STATUS RESTARTS AGE
pod/stratos-0 2/2 Running 0 34m
pod/stratos-config-init-1-mxqbw 0/1 Completed 0 34m
pod/stratos-db-7fc9b7b6b7-sp4lf 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-console-mariadb ClusterIP 10.100.200.65 <none> 3306/TCP 34m
service/my-console-ui-ext LoadBalancer 10.100.200.216 10.195.75.164 443:32286/TCP 34m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/stratos-db 1/1 1 1 34m
NAME DESIRED CURRENT READY AGE
replicaset.apps/stratos-db-7fc9b7b6b7 1 1 1 34m
NAME READY AGE
statefulset.apps/stratos 1/1 34m
NAME COMPLETIONS DURATION AGE
job.batch/stratos-config-init-1 1/1 28s 34m
Unity and Difference
Now I have always identified with this comment of Dostoevsky: "I will tell you that I am a child of this century, a child of disbelief and doubt. I am that today and will remain so until the grave": sometimes more strongly than others. But myths are not about what we believe is "real" at any point in time. The meaning of these symbols I think says something for all of us today - particularly in the United States: that the essence of humanity may be best realized in a unity in difference that can only be realized through self-offering love. In political terms we are all citizens of one country and our obligation as a society is to care for each other. This much ought to be obvious - we cannot exclude one race, one economic class, one geography, one party, from mutual care. The whole point of our systems, in fact, ought to be to realize, however imperfectly, some level of that mutual care, of mutual up-building and mutual support.
That isn't happening today. Too often we are engaged in the opposite - mutual tearing down and avoiding our responsibilities to each other. I wish there was a magic fix for this: it clearly has been a problem that has plagued our history for a long, long time. The one suggestion I can make is to find a way to reach out across boundaries with care on a day by day basis. It may seem like a person cannot make a difference. No individual drop of rain thinks it is responsible for the flood.