
Wiki Page: Using Oracle NoSQL Database with a Multi Node Kubernetes Cluster

Written by Deepak Vohra

Oracle NoSQL Database is a key/value database. Running Oracle NoSQL Database in a standalone Docker container has the limitations of not being able to run in a distributed cluster, not being able to create a service through which an external client could invoke the database, and not being able to scale the database. The Kubernetes cluster manager overcomes all these limitations. In this tutorial we shall run a cluster of Docker containers for Oracle NoSQL Database managed by Kubernetes.

Setting the Environment

The following software is required for this tutorial.

- Docker Engine
- Kubernetes
- Kubectl
- Docker Image for Oracle NoSQL Database (oracle/nosql)

The latest oracle/nosql image is based on Oracle NoSQL Database version 3.5.2 Community Edition.

Create two Amazon EC2 Ubuntu instances, one for the Kubernetes Master Node and one for the Worker Node. SSH login to the Kubernetes Master Node and the Kubernetes Worker Node using the Public IP Address of the EC2 instance.

ssh -i "docker.pem" ubuntu@52.90.115.30

If not already installed for an earlier tutorial, install Kubernetes on a multi-node cluster. Start the Docker instance and verify its status with the following commands; Docker should be running.

sudo service docker start
sudo service docker status

In the next sections we shall discuss creating a Docker container cluster imperatively and declaratively.

Running Oracle NoSQL Database Imperatively

We shall be using the oracle/nosql Docker image. Run the following kubectl run command to create an Oracle NoSQL Database KV Store cluster consisting of 2 replicas with the container port set to 5000.

kubectl run oranosql --image=oracle/nosql --replicas=2 --port=5000

The Dockerfile specifies the following command, which is run when the container cluster is created.

CMD ["java", "-jar", "lib/kvstore.jar", "kvlite"]

A replication controller (rc) called oranosql gets created. The rc selector used to select Pods is run=oranosql. The number of replicas is 2. The Pod label, though not listed, is also set to run=oranosql. List the replication controllers.

kubectl get rc

The oranosql replication controller gets listed. List the Pods.

kubectl get pods

Two Pods (prefixed oranosql) get listed for Oracle NoSQL Database. The two Pods should be listed with STATUS->Running and READY->(1/1).

Creating a Service

To expose the replication controller oranosql as a Kubernetes service on port 5000, run the following kubectl expose rc command.

kubectl expose rc oranosql --port=5000 --type=LoadBalancer

The oranosql service gets created. The service selector, run=oranosql, is the same as the replication controller selector. Subsequently list the Kubernetes services.

kubectl get services

The service oranosql exposed on TCP port 5000 gets listed. Describe the service.

kubectl describe svc oranosql

The service name, namespace, labels, selector, type, IP, and endpoints get listed. One endpoint is listed for each Pod replica.

Scaling the Database

Run the kubectl scale command to scale the replicas. For example, increase the number of replicas from 2 to 4.

kubectl scale rc oranosql --replicas=4

An output of "scaled" implies the rc got scaled. Subsequently list the running Pods. The following command also lists the node on which each Pod is running.

kubectl get pods -o wide

Four Pods, each running Oracle NoSQL Database, get listed. Some Pods are running on the Kubernetes Master Node and some on the Worker Node. Subsequently, describe the service oranosql again.

kubectl describe svc oranosql

Because the cluster has been scaled up to four replicas, 4 endpoints get listed.
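Before moving on to the declarative approach, the definitions Kubernetes generated for the imperatively created objects can be exported with the -o yaml output option; this is an optional sketch, assuming the oranosql rc and service created above are still present, and the exported YAML could serve as a starting point for the definition files created in the next sections.

kubectl get rc oranosql -o yaml
kubectl get svc oranosql -o yaml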
Deleting the Replication Controller and Service

In subsequent sections we shall create a cluster of Oracle NoSQL Database instances declaratively using definition files. As we shall be using the same configuration parameters, delete the oranosql replication controller and the oranosql service.

kubectl delete rc oranosql
kubectl delete svc oranosql

Both the replication controller and the service get deleted. Get the rc and services.

kubectl get rc
kubectl get services

The oranosql rc and oranosql service do not get listed.

Running a Cluster Declaratively

Next, create a cluster running Oracle NoSQL Database declaratively using definition files for a replication controller and a service. Either the YAML or the JSON format could be used for the definition files; we have used the YAML format. Create the definition files oranosql-service.yaml and oranosql-rc.yaml.

Creating a Service

Next, create a service for the cluster declaratively using oranosql-service.yaml. The service definition file specifies the port at which to expose the service (5000), labels for the service, and a selector to match the Pods to be managed by the service. The selector setting of app: oranosql translates to the service selector app=oranosql. Copy the following listing to oranosql-service.yaml.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oranosql
  name: oranosql
spec:
  ports:
    - port: 5000
      targetPort: 5000
  selector:
    app: oranosql
  type: LoadBalancer

The oranosql-service.yaml file may be created in a vi editor and saved with :wq. Run the kubectl create command to create the service.

kubectl create -f oranosql-service.yaml

The oranosql service gets created. Subsequently list the services.

kubectl get services

The oranosql service gets listed. Get the service endpoints.

kubectl get endpoints

The service description does not include any service endpoints because the service selector does not match the label on any Pod already running; we have not yet created a replication controller.

Creating a Replication Controller

Next, we shall create a replication controller with a matching label. The replication controller definition file, oranosql-rc.yaml, defines a replication controller. For the replication controller to manage the Pods defined in the spec mapping, the key: value expression of the selector in the replication controller must match a label in the Pod template mapping. Each has been set to app: oranosql. The template->spec->containers mapping defines the containers in the Pod, with only the Oracle NoSQL Database container (oracle/nosql) defined. The container port is set to 5000.

---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    app: oranosql-rc
  name: oranosql
spec:
  replicas: 2
  selector:
    app: oranosql
  template:
    metadata:
      labels:
        app: oranosql
    spec:
      containers:
        - image: oracle/nosql
          name: oranosql
          ports:
            - containerPort: 5000

The oranosql-rc.yaml file may be created in a vi editor and saved with :wq. Next, run the kubectl create command to create a replication controller from the definition file oranosql-rc.yaml.

kubectl create -f oranosql-rc.yaml

The replication controller gets created. List the replication controllers.

kubectl get rc

The oranosql replication controller gets listed. Describe the replication controller oranosql.

kubectl describe rc oranosql

The rc description includes the name, namespace, Docker image, selector, labels, replicas, and Pod status. Describe the service oranosql.

kubectl describe svc oranosql

The service description includes the name, namespace, labels, selector, type, IP, port, and two service endpoints. List the service endpoints.

kubectl get endpoints

The two endpoints get listed.
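Because either the YAML or the JSON format may be used for definition files, the replication controller above could equivalently be written in JSON. The following is an illustrative sketch of a hypothetical oranosql-rc.json with the same settings, which could likewise be passed to kubectl create -f.

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "oranosql",
    "labels": { "app": "oranosql-rc" }
  },
  "spec": {
    "replicas": 2,
    "selector": { "app": "oranosql" },
    "template": {
      "metadata": {
        "labels": { "app": "oranosql" }
      },
      "spec": {
        "containers": [
          {
            "name": "oranosql",
            "image": "oracle/nosql",
            "ports": [ { "containerPort": 5000 } ]
          }
        ]
      }
    }
  }
}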
Finding Pod Nodes

As we are running a multi-node cluster, it may be of interest to find which Pod is running on which node. To find the Pod-to-node allocation run the following command.

kubectl get pods -o wide

The Pods get listed with an additional column called NODE for the node on which each Pod is running. One Pod is running on the master node and one on the worker node.

Scaling the Database

Next, we shall scale the cluster from 2 to 4 Pods. The kubectl scale command with the replication controller is used to scale the number of Pods.

kubectl scale rc oranosql --replicas=4

The Pod replicas get scaled. Subsequently list the Pods.

kubectl get pods

Four Pod replicas get listed. The preceding command may have to be run multiple times, if required, to list the new Pod replicas as running and ready. To find which Pod runs on which node run the following command.

kubectl get pods -o wide

The node on which each Pod runs gets listed. Two Pods are running on the master node and two on the worker node. Describe the service again.

kubectl describe svc oranosql

Four endpoints should get listed in addition to a single IP address. The Pod replicas may be scaled down as well. For example, scale down to 2 replicas.

kubectl scale rc oranosql --replicas=2

The Pod cluster gets scaled down to 2. Subsequently only 2 Pods get listed.

Starting the Interactive Shell

Next, we shall start an interactive tty (shell) to connect to the Oracle NoSQL Database software running in a Docker container. List the Docker containers.

sudo docker ps

Obtain the Docker container id for one of the containers based on the oracle/nosql image, for example 88b9e7d6cb4b. Start a bash shell using the container id.

sudo docker exec -it 88b9e7d6cb4b bash

A bash shell gets started. The KV Store may be accessed from the bash shell. List the file/directory structure with ls -l. The kvstore.jar jar file for the KV Store and the kvclient.jar file for the KV Client are in the lib directory.

Connecting to KV Store

Run the following command to start the KV Client.

sudo docker run --rm -ti --link kvlite:store oracle/nosql java -jar lib/kvstore.jar runadmin -host store -port 5000 -store kvstore

At the kv> prompt run the ping command to ping the KV Store.

Adding Data to KV Store

The put kv command is used to put key/value data in the KV Store. Add 7 key/value pairs. The key is specified with the -key parameter and the value with the -value parameter.

put kv -key /log1 -value 'Server state changed to STANDBY'
put kv -key /log2 -value 'Server state changed to STARTING'
put kv -key /log3 -value 'Server state changed to ADMIN'
put kv -key /log4 -value 'Server state changed to RESUMING'
put kv -key /log5 -value 'Started WebLogic AdminServer'
put kv -key /log6 -value 'Server state changed to RUNNING'
put kv -key /log7 -value 'Server started in RUNNING mode'

Seven key/value records get added. A single key/value record may be obtained with the get kv command, with the key to get specified with the -key parameter. To get all key/value records run the following command.

get kv -all

A single record gets fetched with get kv -key, and all records get output with get kv -all.

Exiting the Interactive Shell

To log out from the KV CLI run the exit command, and to exit the interactive terminal also run the exit command.
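The key/value pairs added from the KV CLI could also be read and written programmatically with the Java client API in kvclient.jar. The following is a minimal sketch, compiled and run with kvclient.jar on the classpath; the class name KVClientExample is hypothetical, the store name kvstore and port 5000 come from the kvlite configuration described above, and the helper host oranosql is an assumed placeholder for an address at which the KVLite instance is reachable.

// KVClientExample.java: a minimal sketch of putting and getting a key/value
// pair with the Oracle NoSQL Database Java client API (kvclient.jar).
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class KVClientExample {
    public static void main(String[] args) {
        // Open a handle to the kvstore store; "oranosql:5000" is an assumed
        // placeholder helper host:port and would be replaced with an address
        // at which the KVLite instance is reachable.
        KVStoreConfig config = new KVStoreConfig("kvstore", "oranosql:5000");
        KVStore store = KVStoreFactory.getStore(config);

        // Equivalent to: put kv -key /log1 -value 'Server state changed to STANDBY'
        Key key = Key.createKey("log1");
        Value value = Value.createValue("Server state changed to STANDBY".getBytes());
        store.put(key, value);

        // Equivalent to: get kv -key /log1
        ValueVersion valueVersion = store.get(key);
        System.out.println(new String(valueVersion.getValue().getValue()));

        store.close();
    }
}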
Listing Logs

List the logs for one of the Pods.

kubectl logs oranosql-0w59m

The logs generated list that the kvlite store is created with the root as ./kvroot, the store as kvstore, the host as the Pod name, the port as 5000, and the admin port as 5001.

In this tutorial we used Kubernetes to create and orchestrate a Docker container cluster based on the Oracle NoSQL Database Docker image (oracle/nosql). We discussed both the imperative and declarative approaches to creating and managing a cluster. We scaled the cluster and also used a Docker container to start the KV CLI and add KV Store data.
