@@ -5,24 +5,27 @@
[](https://travis-ci.org/open-intelligence/openi)
[](https://coveralls.io/github/open-intelligence/openi?branch=master)
[简体中文](./README_zh.md)
## Introduction
OpenI-Octopus is a cluster management tool and resource scheduling platform, jointly designed and developed by Peking University, Xi'an Jiaotong University, Zhejiang University, and the University of Science and Technology of China, and maintained by [Peng Cheng Laboratory](http://www.pcl.ac.cn/), [Peking University](http://idm.pku.edu.cn/), [University of Science and Technology of China](https://www.ustc.edu.cn/), and [AITISA](http://www.aitisa.org.cn/). The platform incorporates mature designs with a proven track record in large-scale production environments, and is tailored primarily to improve the efficiency of academic research and to reproduce academic research results.
OPENI supports AI jobs (e.g., deep learning jobs) running in a GPU cluster. The platform provides a set of interfaces to support major deep learning frameworks such as CNTK and TensorFlow. These interfaces are highly extensible: a new deep learning framework (or another type of workload) can be supported with a few extra lines of script and/or Python code.
### Features
OPENI supports GPU scheduling, a key requirement of deep learning jobs.
For better performance, OPENI supports fine-grained, topology-aware job placement that can request GPUs at a specific location (e.g., under the same PCI-E switch).
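The idea behind topology-aware placement can be sketched as follows. This is a simplified illustration with hypothetical data structures, not OPENI's actual scheduler code: it prefers free GPUs that share one PCI-E switch before falling back to scattered GPUs.

```python
from collections import defaultdict

def pick_colocated_gpus(gpus, count):
    """Pick `count` free GPUs that share a PCI-E switch, if possible.

    `gpus` is a list of (gpu_id, pcie_switch_id, is_free) tuples.
    Falls back to any free GPUs when no single switch has enough;
    returns None when the node cannot satisfy the request at all.
    """
    by_switch = defaultdict(list)
    for gpu_id, switch, free in gpus:
        if free:
            by_switch[switch].append(gpu_id)
    # Prefer a switch that can satisfy the whole request locally
    # (best PCI-E peer-to-peer bandwidth between the chosen GPUs).
    for free_ids in by_switch.values():
        if len(free_ids) >= count:
            return free_ids[:count]
    # Otherwise spread across switches (worse interconnect locality).
    all_free = [g for ids in by_switch.values() for g in ids]
    return all_free[:count] if len(all_free) >= count else None

# Hypothetical 4-GPU node with two PCI-E switches; GPU 2 is busy.
topology = [(0, "sw0", True), (1, "sw0", True), (2, "sw1", False), (3, "sw1", True)]
print(pick_colocated_gpus(topology, 2))  # [0, 1]: both under switch sw0
```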
- Built on Kubernetes: task runtime environments are managed as container images, so a single configuration can run anywhere;
- Designed for AI scenarios: AI job scheduling and start-up have particular requirements. For example, a distributed job using the PS-Worker architecture must satisfy the resource requests of at least two roles before starting; otherwise, starting the job only wastes resources. OpenI-Octopus includes extensive design and optimization for such scenarios;
- Plug-in architecture: extensibility is provided through plug-ins built around the core business flow, with no restriction on the plug-in development language;
- Easy to deploy: OpenI-Octopus supports rapid deployment via Helm, as well as customized deployment of individual services;
- Heterogeneous hardware support (GPU, NPU, FPGA, etc.): because OpenI-Octopus is built on Kubernetes, plug-ins for different hardware can be customized;
- Support for multiple deep learning frameworks such as TensorFlow, PyTorch, and PaddlePaddle; new frameworks can easily be added via container images.
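The PS-Worker gang-start requirement described above can be sketched as an all-or-nothing admission check. This is a minimal illustration under assumed data structures, not the platform's actual scheduler:

```python
def can_start_job(roles, free_resources):
    """Return True only if every role's resource request can be satisfied.

    `roles` maps role name -> resource request (already multiplied by the
    role's replica count); `free_resources` is the cluster's free capacity.
    Starting a PS-Worker job with only some roles placed would waste
    resources, so the job is admitted all-or-nothing.
    """
    needed = {}
    for request in roles.values():
        for resource, amount in request.items():
            needed[resource] = needed.get(resource, 0) + amount
    return all(free_resources.get(r, 0) >= n for r, n in needed.items())

job = {"ps": {"cpu": 4, "gpu": 0}, "worker": {"cpu": 8, "gpu": 2}}
print(can_start_job(job, {"cpu": 16, "gpu": 2}))  # True: both roles fit
print(can_start_job(job, {"cpu": 16, "gpu": 1}))  # False: worker's GPUs don't fit
```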
OPENI embraces a [microservices](https://en.wikipedia.org/wiki/Microservices) architecture: every component runs in a container.
The system leverages [Kubernetes](https://kubernetes.io/) to deploy and manage system services.
In the latest version of OPENI, the scheduling engine for dynamic deep learning jobs also uses Kubernetes,
so both system services and deep learning jobs are scheduled and managed by Kubernetes.
Storage for training data and results can be customized according to platform/equipment requirements.
Job logs are collected by [Filebeat](https://www.elastic.co/cn/products/beats/filebeat) and stored in an [Elasticsearch](https://www.elastic.co/cn/products/elasticsearch) cluster.
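As a sketch of the log pipeline just described, a minimal Filebeat configuration shipping container logs to Elasticsearch might look like the following. The log path and host name are illustrative assumptions, not the platform's actual configuration:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log    # per-container job logs on each node
output.elasticsearch:
  hosts: ["elasticsearch:9200"]      # the Elasticsearch cluster storing job logs
```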
### Applicable Scenarios
- Building a large-scale AI computing platform;
- Sharing computing resources;
- Completing model training in a unified environment;
- Using integrated plug-ins to assist model training and improve efficiency.
## An Open AI Platform for R&D and Education
@@ -55,9 +58,9 @@ And the system needs an [NTP](http://www.ntp.org/) service for clock synchronization.
### Deployment process
To deploy and use the system, follow these steps.
1. [Deploy Kubernetes for the OPENI-octopus system](./deepops/README_zh.md)
2. [Deploy OPENI-octopus system services](./install_openi_octopus.md)
3. Access the [web portal](./web-portal/README.md) for job submission and cluster management
#### Job management
@@ -71,15 +74,15 @@ The web portal also provides a Web UI for cluster management.
## System Architecture
<p style="text-align: left;">
<img src="./sysarch.png" title="System Architecture" alt="System Architecture" width="70%" height="70%" />
</p>
The system architecture is illustrated above.
Users submit jobs or monitor cluster status through the web portal,
which calls APIs provided by the [REST server](./rest-server/README.md).
Third-party tools can also call the REST server directly for job management.
Upon receiving an API call, the REST server coordinates with the k8s ApiServer, and the k8s scheduler schedules the job to a k8s node with CPU, GPU, and other resources.
[TaskSetController](./taskset/README.md) monitors the job life cycle in the k8s cluster.
The REST server retrieves job status from the k8s ApiServer, and that status is displayed on the web portal.
Other CPU-based AI workloads or traditional big-data jobs
can also run on the platform, coexisting with the GPU-based jobs.
@@ -8,20 +8,23 @@
## Introduction
OpenI-Octopus is a cluster management tool and resource scheduling platform, jointly designed and developed by Peking University, Xi'an Jiaotong University, Zhejiang University, and the University of Science and Technology of China, and maintained by Peng Cheng Laboratory, Peking University, the University of Science and Technology of China, and AITISA. The platform incorporates mature designs that have performed well in large-scale production environments, and is tailored primarily to improve the efficiency of academic research and to reproduce academic research results.
OPENI supports AI jobs (e.g., deep learning jobs) running in a GPU cluster. The platform provides a set of interfaces that support mainstream deep learning frameworks such as CNTK and TensorFlow. These interfaces are highly extensible: after adding a few extra scripts or some Python code, the platform can support a new deep learning framework (or another type of workload).
### Features
- Built on Kubernetes: task runtime environments are managed as container images, so a single configuration can run anywhere;
- Designed for AI scenarios: AI job scheduling and start-up have particular requirements. For example, a distributed job using the PS-Worker architecture must satisfy the resource requests of at least two roles before starting; otherwise, starting the job only wastes resources. OpenI-Octopus includes extensive design and optimization for such scenarios;
- Plug-in architecture: extensibility is provided through plug-ins built around the core business flow, with no restriction on the plug-in development language;
- Easy to deploy: OpenI-Octopus supports rapid deployment via Helm, as well as customized deployment of individual services;
- Heterogeneous hardware support (GPU, NPU, FPGA, etc.): because OpenI-Octopus is built on Kubernetes, plug-ins for different hardware can be customized;
- Support for multiple deep learning frameworks such as TensorFlow, PyTorch, and PaddlePaddle; new frameworks can easily be added via container images.
OPENI supports GPU scheduling, a very important requirement of deep learning.
For better performance, OPENI supports fine-grained, topology-aware job placement that can request GPUs at a specific location (e.g., GPUs under the same PCI-E switch).
OPENI embraces a [microservices](https://en.wikipedia.org/wiki/Microservices) architecture: every component runs in a container.
The platform leverages [Kubernetes](https://kubernetes.io/) to deploy and manage system services.
In the latest version of the platform, the scheduling engine for dynamic deep learning jobs also uses Kubernetes, so both system services and deep learning jobs are scheduled and managed by Kubernetes.
Storage for training data and results can be customized according to platform/equipment requirements. Job logs are collected by [Filebeat](https://www.elastic.co/cn/products/beats/filebeat)
and stored in an [Elasticsearch](https://www.elastic.co/cn/products/elasticsearch) cluster.
### Applicable Scenarios
- Building a large-scale AI computing platform;
- Sharing computing resources;
- Completing model training in a unified environment;
- Using integrated plug-ins to assist model training and improve efficiency.
## An Open AI Platform for R&D and Education
@@ -51,9 +54,9 @@ OPENI operates in an open-source model: contributions from both academia and industry are very welcome.
Follow these steps to deploy and use the system.
1. [Deploy Kubernetes for the OpenI-Octopus system](./deepops/README_zh.md)
2. [Deploy OpenI-Octopus system services](./install_openi_octopus_zh.md)
3. Access the [web portal](./web-portal/README.zh-CN.md) for job submission and cluster management
#### Job management
@@ -67,12 +70,12 @@ The web portal also provides a Web UI for cluster management.
## System Architecture
<p style="text-align: left;">
<img src="./sysarch.png" title="System Architecture" alt="System Architecture" width="70%" height="70%" />
</p>
The overall system architecture is shown above.
Users submit jobs or cluster-status monitoring requests through the web portal, which calls the APIs provided by the [REST server](./rest-server/README.zh-CN.md).
Third-party tools can also call the REST server directly for job management. Upon receiving an API call, the REST server submits the job to the k8s ApiServer; the k8s scheduling engine schedules the job, after which the job can use GPU resources on cluster nodes for deep learning computation.
The [TaskSetController service](./taskset/README.md) monitors the job's life cycle in the k8s cluster. The REST server retrieves job status from the k8s ApiServer, and the web portal displays it in the UI.
Other CPU-based AI workloads or traditional big-data jobs can also run on the platform, coexisting with GPU-based jobs. Storage for training data and results can be customized according to platform/equipment requirements.
@@ -0,0 +1,27 @@
# Private Charts Repository
This project is the internal Helm charts repository.
## Structure
Each directory under the repository root is a minimal chart unit, i.e. one chart package.
The basic structure of each chart is as follows:
```text
/exampleChart
  /charts/            # directory for sub-chart dependencies
  /templates/         # directory for resource templates
  .helmignore         # patterns to ignore when packaging
  Chart.yaml          # chart metadata file
  requirements.yaml   # sub-chart dependency declarations
  values.yaml         # default values file
```
## Development
Charts are developed from the skeleton generated by the Helm CLI:
```sh
$ helm create exampleChart
```
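As context for the `requirements.yaml` file listed above: sub-chart dependencies are declared there. A hypothetical entry might look like the following (the chart name, version range, and repository are illustrative, not taken from this repository):

```yaml
# requirements.yaml of exampleChart (illustrative)
dependencies:
  - name: redis                     # sub-chart to be placed under /charts/
    version: "~1.1.0"               # acceptable version range
    repository: "file://../redis"   # a sibling chart, or a chart repo URL
    condition: redis.enabled        # include only when enabled in values.yaml
```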
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
*.ign.yaml
@@ -0,0 +1,21 @@
apiVersion: v1
name: frameworkcontroller
description: General-Purpose Kubernetes Pod Controller
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 1.16.0
@@ -0,0 +1,6 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- end }}
@@ -0,0 +1,53 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "frameworkcontroller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "frameworkcontroller.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "frameworkcontroller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "frameworkcontroller.labels" -}}
helm.sh/chart: {{ include "frameworkcontroller.chart" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{ include "frameworkcontroller.select-labels" . }}
{{- end -}}
{{/*
Common select labels
*/}}
{{- define "frameworkcontroller.select-labels" -}}
app.kubernetes.io/name: {{ include "frameworkcontroller.name" . }}
app.kubernetes.io/instance: {{ include "frameworkcontroller.fullname" . }}
app.kubernetes.io/part-of: {{ include "frameworkcontroller.name" . }}
{{- end -}}
@@ -0,0 +1,59 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "frameworkcontroller.fullname" . }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "frameworkcontroller.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "frameworkcontroller.fullname" . }}
    # TO BE UPDATED IF NEEDED
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frameworkbarrier
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "frameworkcontroller.fullname" . }}-frameworkbarrier
subjects:
  - kind: ServiceAccount
    name: frameworkbarrier
    # TO BE UPDATED IF NEEDED
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ template "frameworkcontroller.fullname" . }}-frameworkbarrier
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "frameworkcontroller.fullname" . }}-frameworkbarrier
rules:
  - apiGroups:
      - '*'
    resources:
      - 'frameworks'
    verbs:
      - get
      - watch
      - list
@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "frameworkcontroller.fullname" . }}
spec:
  serviceName: {{ template "frameworkcontroller.fullname" . }}
  selector:
    matchLabels:
{{ include "frameworkcontroller.select-labels" . | indent 8 }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
{{ include "frameworkcontroller.labels" . | indent 8 }}
    spec:
      serviceAccountName: {{ template "frameworkcontroller.fullname" . }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.global.image.repository.address }}{{ .Values.global.image.repository.pathname }}/{{ .Values.image.name }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.global.image.pullPolicy }}
        resources:
{{ toYaml .Values.resources | indent 10 }}
      {{- with .Values.global.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
@@ -0,0 +1,38 @@
# Default values for frameworkcontroller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
global:
  image:
    repository:
      address: ""
      pathname: "openi"
    pullPolicy: Always
  nodeSelector: {}
image:
  name: "frameworkcontroller"
  tag: "latest"
ingress:
  enabled: false
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
*.ign.yaml
@@ -0,0 +1,21 @@
apiVersion: v1
name: grafana
description: Resource Monitoring View
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 1.16.0
@@ -0,0 +1,19 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "grafana.fullname" . }})
  export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of it by running 'kubectl get svc -w {{ template "grafana.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc {{ template "grafana.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods -l "app={{ template "grafana.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
{{- end }}
@@ -0,0 +1,66 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "grafana.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "grafana.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "grafana.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "grafana.labels" -}}
helm.sh/chart: {{ include "grafana.chart" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{ include "grafana.select-labels" . }}
{{- end -}}
{{/*
Common select labels
*/}}
{{- define "grafana.select-labels" -}}
app.kubernetes.io/name: {{ include "grafana.name" . }}
app.kubernetes.io/instance: {{ include "grafana.fullname" . }}
app.kubernetes.io/part-of: {{ include "grafana.name" . }}
{{- end -}}
{{- define "prometheus.address" -}}
{{- if .Values.prometheus.customAddr -}}
{{- .Values.prometheus.customAddr -}}
{{- else -}}
{{- $name := .Values.prometheus.host -}}
{{- if contains $name .Release.Name -}}
{{- printf "%s://%s.%s:%s" .Values.prometheus.protocol .Release.Name .Release.Namespace .Values.prometheus.port -}}
{{- else -}}
{{- printf "%s://%s-%s.%s:%s" .Values.prometheus.protocol .Release.Name .Values.prometheus.host .Release.Namespace .Values.prometheus.port -}}
{{- end -}}
{{- end -}}
{{- end -}}
@@ -0,0 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "grafana.fullname" . }}-dashboards
data:
{{- if .Values.exts }}
{{- range .Values.exts }}
  {{ .name }}: |
{{ .context }}
{{- end }}
{{- end }}
@@ -0,0 +1,18 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "grafana.fullname" . }}-dashboard-providers
  labels:
    grafana_dashboard: "true"
data:
  datasource.yml: |-
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        updateIntervalSeconds: 10  # how often Grafana will scan for changed dashboards
        options:
          path: /var/lib/grafana/dashboards/
@@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "grafana.fullname" . }}-datasource
  labels:
    grafana_datasource: "true"
data:
  datasource.yml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: {{ template "prometheus.address" . }}
        isDefault: true
@@ -0,0 +1,67 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "grafana.fullname" . }}
  labels:
{{ include "grafana.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
{{ include "grafana.select-labels" . | indent 8 }}
  template:
    metadata:
      labels:
{{ include "grafana.labels" . | indent 8 }}
    spec:
      {{- with .Values.global.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      containers:
      - image: grafana/grafana:{{ .Values.image.tag }}
        name: grafana
        resources:
{{ toYaml .Values.resources | indent 10 }}
        env:
        # This variable is required to setup templates in Grafana.
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "{{ .Values.env.GF_AUTH_BASIC_ENABLED }}"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "{{ .Values.env.GF_AUTH_ANONYMOUS_ENABLED }}"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: "{{ .Values.env.GF_AUTH_ANONYMOUS_ORG_ROLE }}"
        - name: GF_SECURITY_ADMIN_USER
          value: "{{ .Values.env.GF_SECURITY_ADMIN_USER }}"
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "{{ .Values.env.GF_SECURITY_ADMIN_PASSWORD }}"
        - name: GF_SECURITY_ALLOW_EMBEDDING
          value: "{{ .Values.env.GF_SECURITY_ALLOW_EMBEDDING }}"
        - name: GF_SERVER_ROOT_URL
          value: "{{ .Values.env.GF_SERVER_ROOT_URL }}"
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
        - name: datasource
          mountPath: /etc/grafana/provisioning/datasources/
        - name: dashboard-providers
          mountPath: /etc/grafana/provisioning/dashboards
        - name: dashboards
          mountPath: /var/lib/grafana/dashboards
      volumes:
      - name: grafana-persistent-storage
        emptyDir: {}
      - name: datasource
        configMap:
          name: '{{ template "grafana.fullname" . }}-datasource'
      - name: dashboard-providers
        configMap:
          name: '{{ template "grafana.fullname" . }}-dashboard-providers'
      - name: dashboards
        configMap:
          name: '{{ template "grafana.fullname" . }}-dashboards'
@@ -0,0 +1,17 @@
{{- if .Values.ingress.enabled -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "grafana.fullname" . }}
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: {{ $ingressPath }}/(.*)
            backend:
              serviceName: {{ template "grafana.fullname" . }}
              servicePort: {{ .Values.service.port }}
{{- end }}
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ template "grafana.fullname" . }}
  labels:
{{ include "grafana.labels" . | indent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
  selector:
{{ include "grafana.select-labels" . | indent 6 }}
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
*.ign.yaml
@@ -0,0 +1,21 @@
apiVersion: v1
name: image-factory
description: Factory that generates images
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 1.16.0
@@ -0,0 +1,19 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.agent.service.type }}
  export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "image-factory-agent.fullname" . }})
  export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.agent.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of it by running 'kubectl get svc -w {{ template "image-factory-agent.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc {{ template "image-factory-agent.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ .Values.agent.service.port }}
{{- else if contains "ClusterIP" .Values.agent.service.type }}
  export POD_NAME=$(kubectl get pods -l "app={{ template "image-factory-agent.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
{{- end }}
@@ -0,0 +1,89 @@ | |||
{{/* vim: set filetype=mustache: */}} | |||
{{/* | |||
Expand the name of the chart. | |||
*/}} | |||
{{- define "image-factory.name" -}} | |||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{- define "image-factory-agent.name" -}} | |||
{{- "agent" -}} | |||
{{- end -}} | |||
{{- define "image-factory-shield.name" -}} | |||
{{- "shield" -}} | |||
{{- end -}} | |||
{{/* | |||
Create a default fully qualified app name. | |||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). | |||
If release name contains chart name it will be used as a full name. | |||
*/}} | |||
{{- define "image-factory.fullname" -}} | |||
{{- if .Values.fullnameOverride -}} | |||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} | |||
{{- else -}} | |||
{{- $name := default .Chart.Name .Values.nameOverride -}} | |||
{{- if contains $name .Release.Name -}} | |||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}} | |||
{{- else -}} | |||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{- end -}} | |||
{{- end -}} | |||
{{- define "image-factory-agent.fullname" -}} | |||
{{- printf "%s-%s" (include "image-factory.fullname" .) (include "image-factory-agent.name" .) | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{- define "image-factory-shield.fullname" -}} | |||
{{- printf "%s-%s" (include "image-factory.fullname" .) (include "image-factory-shield.name" .) | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{/* | |||
Create chart name and version as used by the chart label. | |||
*/}} | |||
{{- define "image-factory.chart" -}} | |||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{/* | |||
Common labels | |||
*/}} | |||
{{- define "image-factory.labels" -}} | |||
helm.sh/chart: {{ include "image-factory.chart" . }} | |||
{{- if .Chart.AppVersion }} | |||
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} | |||
{{- end }} | |||
app.kubernetes.io/managed-by: {{ .Release.Service }} | |||
{{- end -}} | |||
{{/* | |||
Common selector labels | |||
*/}} | |||
{{- define "image-factory.select-labels" -}} | |||
app.kubernetes.io/part-of: {{ include "image-factory.name" . }} | |||
{{- end -}} | |||
{{- define "image-factory-agent.select-labels" -}} | |||
app.kubernetes.io/name: {{ include "image-factory-agent.name" . }} | |||
app.kubernetes.io/instance: {{ include "image-factory-agent.fullname" . }} | |||
{{ include "image-factory.select-labels" . }} | |||
{{- end -}} | |||
{{- define "image-factory-shield.select-labels" -}} | |||
app.kubernetes.io/name: {{ include "image-factory-shield.name" . }} | |||
app.kubernetes.io/instance: {{ include "image-factory-shield.fullname" . }} | |||
{{ include "image-factory.select-labels" . }} | |||
{{- end -}} | |||
{{- define "image-factory-agent.labels" -}} | |||
{{ include "image-factory.labels" . }} | |||
{{ include "image-factory-agent.select-labels" . }} | |||
{{- end -}} | |||
{{- define "image-factory-shield.labels" -}} | |||
{{ include "image-factory.labels" . }} | |||
{{ include "image-factory-shield.select-labels" . }} | |||
{{- end -}} |
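The helpers above repeatedly pipe names through `trunc 63 | trimSuffix "-"` because some Kubernetes name fields are capped at 63 characters by the DNS label spec. A minimal plain-shell sketch of that pipeline, using a hypothetical over-long release name for illustration:

```shell
# Sketch of what `trunc 63 | trimSuffix "-"` in the helpers above does,
# expressed in plain shell. The name below is a made-up example.
full_name="my-release-image-factory-agent-padded-to-be-much-longer-than-sixty-three-chars"
truncated=$(printf '%s' "$full_name" | cut -c1-63)  # trunc 63
truncated=${truncated%-}                            # trimSuffix "-" strips one trailing dash
echo "$truncated (${#truncated} chars)"
```

The `trimSuffix "-"` step matters because truncating at exactly 63 characters can leave a trailing dash, which is not a valid Kubernetes name.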
@@ -0,0 +1,59 @@ | |||
apiVersion: apps/v1 | |||
kind: DaemonSet | |||
metadata: | |||
name: {{ template "image-factory-agent.fullname" . }} | |||
labels: | |||
{{ include "image-factory-agent.labels" . | indent 4 }} | |||
spec: | |||
selector: | |||
matchLabels: | |||
{{ include "image-factory-agent.select-labels" . | indent 8 }} | |||
template: | |||
metadata: | |||
labels: | |||
{{ include "image-factory-agent.labels" . | indent 8 }} | |||
spec: | |||
dnsPolicy: ClusterFirstWithHostNet | |||
hostNetwork: true | |||
hostPID: true | |||
{{- if .Values.agent.volumes }} | |||
volumes: | |||
{{- range .Values.agent.volumes }} | |||
- name: {{ .name }} | |||
hostPath: | |||
path: {{ .hostPath }} | |||
{{- end }} | |||
{{- end }} | |||
containers: | |||
- name: {{ template "image-factory-agent.name" . }} | |||
image: "{{ .Values.global.image.repository.address }}{{ .Values.global.image.repository.pathname }}/{{ .Values.agent.image.name }}:{{ .Values.agent.image.tag }}" | |||
imagePullPolicy: {{ .Values.global.image.pullPolicy }} | |||
ports: | |||
- name: http | |||
containerPort: {{ .Values.agent.service.targetPort }} | |||
hostPort: {{ .Values.agent.service.targetPort }} | |||
protocol: TCP | |||
resources: | |||
{{ toYaml .Values.agent.resources | indent 12 }} | |||
env: | |||
- name: SHIELD_ADDRESS | |||
value: 'http://{{ template "image-factory-shield.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.shield.service.port }}' | |||
{{- if .Values.agent.volumes }} | |||
volumeMounts: | |||
{{- range .Values.agent.volumes }} | |||
- mountPath: {{ .mountPath }} | |||
name: {{ .name }} | |||
{{- end }} | |||
{{- end }} | |||
{{- with .Values.agent.nodeSelector }} | |||
nodeSelector: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} | |||
{{- with .Values.agent.affinity }} | |||
affinity: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} | |||
{{- with .Values.agent.tolerations }} | |||
tolerations: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} |
@@ -0,0 +1,38 @@ | |||
apiVersion: apps/v1 | |||
kind: Deployment | |||
metadata: | |||
name: {{ template "image-factory-shield.fullname" . }} | |||
labels: | |||
{{ include "image-factory-shield.labels" . | indent 4 }} | |||
spec: | |||
replicas: {{ .Values.shield.replicaCount }} | |||
selector: | |||
matchLabels: | |||
{{ include "image-factory-shield.select-labels" . | indent 8 }} | |||
template: | |||
metadata: | |||
labels: | |||
{{ include "image-factory-shield.labels" . | indent 8 }} | |||
spec: | |||
containers: | |||
- name: {{ template "image-factory-shield.name" . }} | |||
image: "{{ .Values.global.image.repository.address }}{{ .Values.global.image.repository.pathname }}/{{ .Values.shield.image.name }}:{{ .Values.shield.image.tag }}" | |||
imagePullPolicy: {{ .Values.global.image.pullPolicy }} | |||
ports: | |||
- name: http | |||
containerPort: {{ .Values.shield.service.targetPort }} | |||
protocol: TCP | |||
resources: | |||
{{ toYaml .Values.shield.resources | indent 12 }} | |||
{{- with .Values.global.nodeSelector }} | |||
nodeSelector: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} | |||
{{- with .Values.shield.affinity }} | |||
affinity: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} | |||
{{- with .Values.shield.tolerations }} | |||
tolerations: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} |
@@ -0,0 +1,15 @@ | |||
apiVersion: v1 | |||
kind: Service | |||
metadata: | |||
name: {{ template "image-factory-shield.fullname" . }} | |||
labels: | |||
{{ include "image-factory-shield.labels" . | indent 4 }} | |||
spec: | |||
type: {{ .Values.shield.service.type }} | |||
ports: | |||
- port: {{ .Values.shield.service.port }} | |||
targetPort: http | |||
protocol: TCP | |||
name: http | |||
selector: | |||
{{ include "image-factory-shield.select-labels" . | indent 4 }} |
@@ -0,0 +1,70 @@ | |||
# Default values for image-factory. | |||
# This is a YAML-formatted file. | |||
# Declare variables to be passed into your templates. | |||
nameOverride: "" | |||
fullnameOverride: "" | |||
global: | |||
image: | |||
repository: | |||
address: "" | |||
pathname: "openi" | |||
pullPolicy: Always | |||
nodeSelector: {} | |||
agent: | |||
image: | |||
name: "image-factory-agent" | |||
tag: "latest" | |||
service: | |||
type: ClusterIP | |||
port: 9002 | |||
targetPort: 9002 | |||
volumes: | |||
- name: docker-run | |||
mountPath: /var/run | |||
hostPath: /var/run | |||
- name: docker | |||
mountPath: /var/lib/docker | |||
hostPath: /var/lib/docker | |||
resources: {} | |||
tolerations: [] | |||
affinity: {} | |||
nodeSelector: {} | |||
shield: | |||
replicaCount: 1 | |||
image: | |||
name: "image-factory-shield" | |||
tag: "latest" | |||
service: | |||
type: ClusterIP | |||
port: 80 | |||
targetPort: 9001 | |||
resources: {} | |||
tolerations: [] | |||
affinity: {} | |||
ingress: | |||
enabled: false | |||
annotations: {} | |||
# kubernetes.io/ingress.class: nginx | |||
# kubernetes.io/tls-acme: "true" | |||
path: / | |||
tls: [] | |||
# - secretName: chart-example-tls | |||
# hosts: | |||
# - chart-example.local | |||
# We usually recommend not to specify default resources and to leave this as a conscious | |||
# choice for the user. This also increases chances charts run on environments with little | |||
# resources, such as Minikube. If you do want to specify resources, uncomment the following | |||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'. | |||
# limits: | |||
# cpu: 100m | |||
# memory: 128Mi | |||
# requests: | |||
# cpu: 100m | |||
# memory: 128Mi |
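The defaults above are consumed by the templates as nested key paths (for example `global.image.repository.address` and `shield.replicaCount`). A hypothetical override file, assuming a private registry; note the trailing slash on `address`, since the image templates concatenate `address` and `pathname` with no separator:

```yaml
# my-values.yaml -- illustrative overrides; all key paths mirror the defaults above.
global:
  image:
    repository:
      address: "registry.example.com/"   # trailing slash required by the template concatenation
      pathname: "openi"
    pullPolicy: IfNotPresent
agent:
  image:
    tag: "v1.0.0"
shield:
  replicaCount: 2
```

Such a file would typically be applied with `helm install -f my-values.yaml <chart-dir>`.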
@@ -0,0 +1,21 @@ | |||
# Patterns to ignore when building packages. | |||
# This supports shell glob matching, relative path matching, and | |||
# negation (prefixed with !). Only one pattern per line. | |||
.DS_Store | |||
# Common VCS dirs | |||
.git/ | |||
.gitignore | |||
.bzr/ | |||
.bzrignore | |||
.hg/ | |||
.hgignore | |||
.svn/ | |||
# Common backup files | |||
*.swp | |||
*.bak | |||
*.tmp | |||
*~ | |||
# Various IDEs | |||
.project | |||
.idea/ | |||
*.tmproj |
@@ -0,0 +1,21 @@ | |||
apiVersion: v1 | |||
name: ingress-nginx | |||
description: Default Ingress Nginx for Octopus | |||
# A chart can be either an 'application' or a 'library' chart. | |||
# | |||
# Application charts are a collection of templates that can be packaged into versioned archives | |||
# to be deployed. | |||
# | |||
# Library charts provide useful utilities or functions for the chart developer. They're included as | |||
# a dependency of application charts to inject those utilities and functions into the rendering | |||
# pipeline. Library charts do not define any templates and therefore cannot be deployed. | |||
type: application | |||
# This is the chart version. This version number should be incremented each time you make changes | |||
# to the chart and its templates, including the app version. | |||
version: 0.1.0 | |||
# This is the version number of the application being deployed. This version number should be | |||
# incremented each time you make changes to the application. | |||
appVersion: 1.16.0 |
@@ -0,0 +1,19 @@ | |||
1. Get the application URL by running these commands: | |||
{{- if .Values.ingress.enabled }} | |||
{{- range .Values.ingress.hosts }} | |||
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }} | |||
{{- end }} | |||
{{- else if contains "NodePort" .Values.service.type }} | |||
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "ingress-nginx.fullname" . }}) | |||
export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}") | |||
echo http://$NODE_IP:$NODE_PORT | |||
{{- else if contains "LoadBalancer" .Values.service.type }} | |||
NOTE: It may take a few minutes for the LoadBalancer IP to be available. | |||
You can watch the status of it by running 'kubectl get svc -w {{ template "ingress-nginx.fullname" . }}' | |||
export SERVICE_IP=$(kubectl get svc {{ template "ingress-nginx.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') | |||
echo http://$SERVICE_IP:{{ .Values.service.port }} | |||
{{- else if contains "ClusterIP" .Values.service.type }} | |||
export POD_NAME=$(kubectl get pods -l "app={{ template "ingress-nginx.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") | |||
echo "Visit http://127.0.0.1:8080 to use your application" | |||
kubectl port-forward $POD_NAME 8080:80 | |||
{{- end }} |
@@ -0,0 +1,45 @@ | |||
{{/* vim: set filetype=mustache: */}} | |||
{{/* | |||
Expand the name of the chart. | |||
*/}} | |||
{{- define "ingress-nginx.name" -}} | |||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{/* | |||
Create a default fully qualified app name. | |||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). | |||
If release name contains chart name it will be used as a full name. | |||
*/}} | |||
{{- define "ingress-nginx.fullname" -}} | |||
{{- if .Values.fullnameOverride -}} | |||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} | |||
{{- else -}} | |||
{{- $name := default .Chart.Name .Values.nameOverride -}} | |||
{{- if contains $name .Release.Name -}} | |||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}} | |||
{{- else -}} | |||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{- end -}} | |||
{{- end -}} | |||
{{/* | |||
Create chart name and version as used by the chart label. | |||
*/}} | |||
{{- define "ingress-nginx.chart" -}} | |||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{/* | |||
Common labels | |||
*/}} | |||
{{- define "ingress-nginx.labels" -}} | |||
app.kubernetes.io/name: {{ include "ingress-nginx.name" . }} | |||
helm.sh/chart: {{ include "ingress-nginx.chart" . }} | |||
app.kubernetes.io/instance: {{ .Release.Name }} | |||
{{- if .Chart.AppVersion }} | |||
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} | |||
{{- end }} | |||
app.kubernetes.io/managed-by: {{ .Release.Service }} | |||
{{- end -}} |
@@ -0,0 +1,31 @@ | |||
kind: ConfigMap | |||
apiVersion: v1 | |||
metadata: | |||
name: nginx-configuration | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
data: | |||
hsts: "false" # disable forced HTTPS, see https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/ | |||
ssl-redirect: "false" # https://kubernetes.github.io/ingress-nginx/user-guide/tls/ | |||
--- | |||
kind: ConfigMap | |||
apiVersion: v1 | |||
metadata: | |||
name: tcp-services | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
--- | |||
kind: ConfigMap | |||
apiVersion: v1 | |||
metadata: | |||
name: udp-services | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx |
@@ -0,0 +1,79 @@ | |||
apiVersion: apps/v1 | |||
kind: DaemonSet | |||
metadata: | |||
name: nginx-ingress-daemonset | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
spec: | |||
selector: | |||
matchLabels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
template: | |||
metadata: | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
annotations: | |||
prometheus.io/port: "10254" | |||
prometheus.io/scrape: "true" | |||
spec: | |||
serviceAccountName: nginx-ingress-serviceaccount | |||
containers: | |||
- name: nginx-ingress-controller | |||
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1 | |||
args: | |||
- /nginx-ingress-controller | |||
- --configmap=$(POD_NAMESPACE)/nginx-configuration | |||
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services | |||
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services | |||
- --publish-service=$(POD_NAMESPACE)/ingress-nginx | |||
- --annotations-prefix=nginx.ingress.kubernetes.io | |||
securityContext: | |||
allowPrivilegeEscalation: true | |||
capabilities: | |||
drop: | |||
- ALL | |||
add: | |||
- NET_BIND_SERVICE | |||
# www-data -> 33 | |||
runAsUser: 33 | |||
env: | |||
- name: POD_NAME | |||
valueFrom: | |||
fieldRef: | |||
fieldPath: metadata.name | |||
- name: POD_NAMESPACE | |||
valueFrom: | |||
fieldRef: | |||
fieldPath: metadata.namespace | |||
ports: | |||
- name: http | |||
containerPort: 80 | |||
hostPort: 80 | |||
- name: https | |||
containerPort: 443 | |||
hostPort: 443 | |||
livenessProbe: | |||
failureThreshold: 3 | |||
httpGet: | |||
path: /healthz | |||
port: 10254 | |||
scheme: HTTP | |||
initialDelaySeconds: 10 | |||
periodSeconds: 10 | |||
successThreshold: 1 | |||
timeoutSeconds: 10 | |||
readinessProbe: | |||
failureThreshold: 3 | |||
httpGet: | |||
path: /healthz | |||
port: 10254 | |||
scheme: HTTP | |||
periodSeconds: 10 | |||
successThreshold: 1 | |||
timeoutSeconds: 10 | |||
@@ -0,0 +1,7 @@ | |||
apiVersion: v1 | |||
kind: Namespace | |||
metadata: | |||
name: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx |
@@ -0,0 +1,148 @@ | |||
apiVersion: v1 | |||
kind: ServiceAccount | |||
metadata: | |||
name: nginx-ingress-serviceaccount | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
--- | |||
apiVersion: rbac.authorization.k8s.io/v1beta1 | |||
kind: ClusterRole | |||
metadata: | |||
name: nginx-ingress-clusterrole | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
rules: | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- configmaps | |||
- endpoints | |||
- nodes | |||
- pods | |||
- secrets | |||
verbs: | |||
- list | |||
- watch | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- nodes | |||
verbs: | |||
- get | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- services | |||
verbs: | |||
- get | |||
- list | |||
- watch | |||
- apiGroups: | |||
- "extensions" | |||
resources: | |||
- ingresses | |||
verbs: | |||
- get | |||
- list | |||
- watch | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- events | |||
verbs: | |||
- create | |||
- patch | |||
- apiGroups: | |||
- "extensions" | |||
resources: | |||
- ingresses/status | |||
verbs: | |||
- update | |||
--- | |||
apiVersion: rbac.authorization.k8s.io/v1beta1 | |||
kind: Role | |||
metadata: | |||
name: nginx-ingress-role | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
rules: | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- configmaps | |||
- pods | |||
- secrets | |||
- namespaces | |||
verbs: | |||
- get | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- configmaps | |||
resourceNames: | |||
# Defaults to "<election-id>-<ingress-class>" | |||
# Here: "<ingress-controller-leader>-<nginx>" | |||
# This has to be adapted if you change either parameter | |||
# when launching the nginx-ingress-controller. | |||
- "ingress-controller-leader-nginx" | |||
verbs: | |||
- get | |||
- update | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- configmaps | |||
verbs: | |||
- create | |||
- apiGroups: | |||
- "" | |||
resources: | |||
- endpoints | |||
verbs: | |||
- get | |||
--- | |||
apiVersion: rbac.authorization.k8s.io/v1beta1 | |||
kind: RoleBinding | |||
metadata: | |||
name: nginx-ingress-role-nisa-binding | |||
namespace: ingress-nginx | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
roleRef: | |||
apiGroup: rbac.authorization.k8s.io | |||
kind: Role | |||
name: nginx-ingress-role | |||
subjects: | |||
- kind: ServiceAccount | |||
name: nginx-ingress-serviceaccount | |||
namespace: ingress-nginx | |||
--- | |||
apiVersion: rbac.authorization.k8s.io/v1beta1 | |||
kind: ClusterRoleBinding | |||
metadata: | |||
name: nginx-ingress-clusterrole-nisa-binding | |||
labels: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx | |||
roleRef: | |||
apiGroup: rbac.authorization.k8s.io | |||
kind: ClusterRole | |||
name: nginx-ingress-clusterrole | |||
subjects: | |||
- kind: ServiceAccount | |||
name: nginx-ingress-serviceaccount | |||
namespace: ingress-nginx | |||
@@ -0,0 +1,19 @@ | |||
apiVersion: v1 | |||
kind: Service | |||
metadata: | |||
name: ingress-nginx | |||
namespace: ingress-nginx | |||
spec: | |||
# type: NodePort | |||
ports: | |||
- port: 80 | |||
name: http | |||
# nodePort: 30080 | |||
protocol: TCP | |||
- port: 443 | |||
name: https | |||
# nodePort: 30443 | |||
protocol: TCP | |||
selector: | |||
app.kubernetes.io/name: ingress-nginx | |||
app.kubernetes.io/part-of: ingress-nginx |
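The Service above ships with its `NodePort` settings commented out. For reference, uncommenting them would yield roughly the following (the 30080/30443 ports come from the commented lines and are only defaults to adjust):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - port: 80
      name: http
      nodePort: 30080   # exposed on every node
      protocol: TCP
    - port: 443
      name: https
      nodePort: 30443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

With the controller running on host ports 80/443 via the DaemonSet, the `NodePort` variant is only needed when direct host networking is not desired.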
@@ -0,0 +1,47 @@ | |||
# Default values for ingress-nginx. | |||
# This is a YAML-formatted file. | |||
# Declare variables to be passed into your templates. | |||
replicaCount: 1 | |||
#image: | |||
# repository: nginx | |||
# pullPolicy: IfNotPresent | |||
nameOverride: "" | |||
fullnameOverride: "" | |||
service: | |||
type: ClusterIP | |||
port: 80 | |||
ingress: | |||
enabled: false | |||
annotations: {} | |||
# kubernetes.io/ingress.class: nginx | |||
# kubernetes.io/tls-acme: "true" | |||
path: / | |||
hosts: | |||
- chart-example.local | |||
tls: [] | |||
# - secretName: chart-example-tls | |||
# hosts: | |||
# - chart-example.local | |||
resources: {} | |||
# We usually recommend not to specify default resources and to leave this as a conscious | |||
# choice for the user. This also increases chances charts run on environments with little | |||
# resources, such as Minikube. If you do want to specify resources, uncomment the following | |||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'. | |||
# limits: | |||
# cpu: 100m | |||
# memory: 128Mi | |||
# requests: | |||
# cpu: 100m | |||
# memory: 128Mi | |||
nodeSelector: {} | |||
tolerations: [] | |||
affinity: {} |
@@ -0,0 +1,22 @@ | |||
# Patterns to ignore when building packages. | |||
# This supports shell glob matching, relative path matching, and | |||
# negation (prefixed with !). Only one pattern per line. | |||
.DS_Store | |||
# Common VCS dirs | |||
.git/ | |||
.gitignore | |||
.bzr/ | |||
.bzrignore | |||
.hg/ | |||
.hgignore | |||
.svn/ | |||
# Common backup files | |||
*.swp | |||
*.bak | |||
*.tmp | |||
*~ | |||
# Various IDEs | |||
.project | |||
.idea/ | |||
*.tmproj | |||
*.ign.yaml |
@@ -0,0 +1,21 @@ | |||
apiVersion: v1 | |||
name: jupyterlab-proxy | |||
description: Jupyterlab Proxy | |||
# A chart can be either an 'application' or a 'library' chart. | |||
# | |||
# Application charts are a collection of templates that can be packaged into versioned archives | |||
# to be deployed. | |||
# | |||
# Library charts provide useful utilities or functions for the chart developer. They're included as | |||
# a dependency of application charts to inject those utilities and functions into the rendering | |||
# pipeline. Library charts do not define any templates and therefore cannot be deployed. | |||
type: application | |||
# This is the chart version. This version number should be incremented each time you make changes | |||
# to the chart and its templates, including the app version. | |||
version: 0.1.0 | |||
# This is the version number of the application being deployed. This version number should be | |||
# incremented each time you make changes to the application. | |||
appVersion: 1.16.0 |
@@ -0,0 +1,19 @@ | |||
1. Get the application URL by running these commands: | |||
{{- if .Values.ingress.enabled }} | |||
{{- range .Values.ingress.hosts }} | |||
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }} | |||
{{- end }} | |||
{{- else if contains "NodePort" .Values.service.type }} | |||
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "jupyterlab-proxy.fullname" . }}) | |||
export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}") | |||
echo http://$NODE_IP:$NODE_PORT | |||
{{- else if contains "LoadBalancer" .Values.service.type }} | |||
NOTE: It may take a few minutes for the LoadBalancer IP to be available. | |||
You can watch the status of it by running 'kubectl get svc -w {{ template "jupyterlab-proxy.fullname" . }}' | |||
export SERVICE_IP=$(kubectl get svc {{ template "jupyterlab-proxy.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') | |||
echo http://$SERVICE_IP:{{ .Values.service.port }} | |||
{{- else if contains "ClusterIP" .Values.service.type }} | |||
export POD_NAME=$(kubectl get pods -l "app={{ template "jupyterlab-proxy.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") | |||
echo "Visit http://127.0.0.1:8080 to use your application" | |||
kubectl port-forward $POD_NAME 8080:80 | |||
{{- end }} |
@@ -0,0 +1,53 @@ | |||
{{/* vim: set filetype=mustache: */}} | |||
{{/* | |||
Expand the name of the chart. | |||
*/}} | |||
{{- define "jupyterlab-proxy.name" -}} | |||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{/* | |||
Create a default fully qualified app name. | |||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). | |||
If release name contains chart name it will be used as a full name. | |||
*/}} | |||
{{- define "jupyterlab-proxy.fullname" -}} | |||
{{- if .Values.fullnameOverride -}} | |||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} | |||
{{- else -}} | |||
{{- $name := default .Chart.Name .Values.nameOverride -}} | |||
{{- if contains $name .Release.Name -}} | |||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}} | |||
{{- else -}} | |||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{- end -}} | |||
{{- end -}} | |||
{{/* | |||
Create chart name and version as used by the chart label. | |||
*/}} | |||
{{- define "jupyterlab-proxy.chart" -}} | |||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} | |||
{{- end -}} | |||
{{/* | |||
Common labels | |||
*/}} | |||
{{- define "jupyterlab-proxy.labels" -}} | |||
helm.sh/chart: {{ include "jupyterlab-proxy.chart" . }} | |||
{{- if .Chart.AppVersion }} | |||
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} | |||
{{- end }} | |||
app.kubernetes.io/managed-by: {{ .Release.Service }} | |||
{{ include "jupyterlab-proxy.select-labels" . }} | |||
{{- end -}} | |||
{{/* | |||
Common selector labels | |||
*/}} | |||
{{- define "jupyterlab-proxy.select-labels" -}} | |||
app.kubernetes.io/name: {{ include "jupyterlab-proxy.name" . }} | |||
app.kubernetes.io/instance: {{ include "jupyterlab-proxy.fullname" . }} | |||
app.kubernetes.io/part-of: {{ include "jupyterlab-proxy.name" . }} | |||
{{- end -}} |
@@ -0,0 +1,43 @@ | |||
apiVersion: apps/v1 | |||
kind: Deployment | |||
metadata: | |||
name: {{ template "jupyterlab-proxy.fullname" . }} | |||
labels: | |||
{{ include "jupyterlab-proxy.labels" . | indent 4 }} | |||
spec: | |||
replicas: {{ .Values.replicaCount }} | |||
selector: | |||
matchLabels: | |||
{{ include "jupyterlab-proxy.select-labels" . | indent 8 }} | |||
template: | |||
metadata: | |||
labels: | |||
{{ include "jupyterlab-proxy.labels" . | indent 8 }} | |||
spec: | |||
containers: | |||
- name: {{ .Chart.Name }} | |||
image: "{{ .Values.global.image.repository.address }}{{ .Values.global.image.repository.pathname }}/{{ .Values.image.name }}:{{ .Values.image.tag }}" | |||
imagePullPolicy: {{ .Values.global.image.pullPolicy }} | |||
env: | |||
- name: SERVER_PORT | |||
value: "{{ .Values.service.targetPort }}" | |||
- name: DefaultTarget | |||
value: '192.168.202.1' | |||
ports: | |||
- name: http | |||
containerPort: {{ .Values.service.targetPort }} | |||
protocol: TCP | |||
resources: | |||
{{ toYaml .Values.resources | indent 12 }} | |||
{{- with .Values.global.nodeSelector }} | |||
nodeSelector: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} | |||
{{- with .Values.affinity }} | |||
affinity: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} | |||
{{- with .Values.tolerations }} | |||
tolerations: | |||
{{ toYaml . | indent 8 }} | |||
{{- end }} |
@@ -0,0 +1,22 @@ | |||
{{- if .Values.ingress.enabled -}} | |||
{{- $fullName := include "jupyterlab-proxy.fullname" . -}} | |||
{{- $ingressPath := .Values.ingress.path -}} | |||
apiVersion: extensions/v1beta1 | |||
kind: Ingress | |||
metadata: | |||
name: {{ $fullName }} | |||
labels: | |||
{{ include "jupyterlab-proxy.labels" . | indent 4 }} | |||
{{- with .Values.ingress.annotations }} | |||
annotations: | |||
{{ toYaml . | indent 4 }} | |||
{{- end }} | |||
spec: | |||
rules: | |||
- http: | |||
paths: | |||
- path: {{ $ingressPath }} | |||
backend: | |||
serviceName: {{ $fullName }} | |||
servicePort: {{ .Values.service.port }} | |||
{{- end }} |
@@ -0,0 +1,15 @@ | |||
apiVersion: v1 | |||
kind: Service | |||
metadata: | |||
name: {{ template "jupyterlab-proxy.fullname" . }} | |||
labels: | |||
{{ include "jupyterlab-proxy.labels" . | indent 4 }} | |||
spec: | |||
type: {{ .Values.service.type }} | |||
ports: | |||
- port: {{ .Values.service.port }} | |||
targetPort: {{ .Values.service.targetPort }} | |||
protocol: TCP | |||
name: http | |||
selector: | |||
{{ include "jupyterlab-proxy.select-labels" . | indent 4 }} |
@@ -0,0 +1,49 @@ | |||
# Default values for jupyterlab-proxy. | |||
# This is a YAML-formatted file. | |||
# Declare variables to be passed into your templates. | |||
replicaCount: 1 | |||
global: | |||
image: | |||
repository: | |||
address: "" | |||
pathname: "openi" | |||
pullPolicy: Always | |||
nodeSelector: {} | |||
image: | |||
name: "jupyterlab-proxy" | |||
tag: "latest" | |||
service: | |||
type: ClusterIP | |||
port: 80 | |||
targetPort: 8080 | |||
ingress: | |||
enabled: true | |||
annotations: {} | |||
# kubernetes.io/ingress.class: nginx | |||
# kubernetes.io/tls-acme: "true" | |||
path: /jpylab | |||
tls: [] | |||
# - secretName: chart-example-tls | |||
# hosts: | |||
# - chart-example.local | |||
resources: {} | |||
# We usually recommend not to specify default resources and to leave this as a conscious | |||
# choice for the user. This also increases chances charts run on environments with little | |||
# resources, such as Minikube. If you do want to specify resources, uncomment the following | |||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'. | |||
# limits: | |||
# cpu: 100m | |||
# memory: 128Mi | |||
# requests: | |||
# cpu: 100m | |||
# memory: 128Mi | |||
tolerations: [] | |||
affinity: {} |
@@ -0,0 +1,22 @@ | |||
# Patterns to ignore when building packages. | |||
# This supports shell glob matching, relative path matching, and | |||
# negation (prefixed with !). Only one pattern per line. | |||
.DS_Store | |||
# Common VCS dirs | |||
.git/ | |||
.gitignore | |||
.bzr/ | |||
.bzrignore | |||
.hg/ | |||
.hgignore | |||
.svn/ | |||
# Common backup files | |||
*.swp | |||
*.bak | |||
*.tmp | |||
*~ | |||
# Various IDEs | |||
.project | |||
.idea/ | |||
*.tmproj | |||
*.ign.yaml |
@@ -0,0 +1,4 @@ | |||
apiVersion: v1alpha1 | |||
description: The batch scheduler of Kubernetes | |||
name: kube-batch | |||
version: 0.4.2 |
@@ -0,0 +1,43 @@ | |||
apiVersion: apiextensions.k8s.io/v1beta1 | |||
kind: CustomResourceDefinition | |||
metadata: | |||
name: podgroups.scheduling.incubator.k8s.io | |||
spec: | |||
group: scheduling.incubator.k8s.io | |||
names: | |||
kind: PodGroup | |||
plural: podgroups | |||
scope: Namespaced | |||
validation: | |||
openAPIV3Schema: | |||
properties: | |||
apiVersion: | |||
type: string | |||
kind: | |||
type: string | |||
metadata: | |||
type: object | |||
spec: | |||
properties: | |||
minMember: | |||
format: int32 | |||
type: integer | |||
queue: | |||
type: string | |||
priorityClassName: | |||
type: string | |||
type: object | |||
status: | |||
properties: | |||
succeeded: | |||
format: int32 | |||
type: integer | |||
failed: | |||
format: int32 | |||
type: integer | |||
running: | |||
format: int32 | |||
type: integer | |||
type: object | |||
type: object | |||
version: v1alpha1 |
@@ -0,0 +1,27 @@ | |||
apiVersion: apiextensions.k8s.io/v1beta1 | |||
kind: CustomResourceDefinition | |||
metadata: | |||
name: queues.scheduling.incubator.k8s.io | |||
spec: | |||
group: scheduling.incubator.k8s.io | |||
names: | |||
kind: Queue | |||
plural: queues | |||
scope: Cluster | |||
validation: | |||
openAPIV3Schema: | |||
properties: | |||
apiVersion: | |||
type: string | |||
kind: | |||
type: string | |||
metadata: | |||
type: object | |||
spec: | |||
properties: | |||
weight: | |||
format: int32 | |||
type: integer | |||
type: object | |||
type: object | |||
version: v1alpha1 |
@@ -0,0 +1,43 @@ | |||
apiVersion: apiextensions.k8s.io/v1beta1 | |||
kind: CustomResourceDefinition | |||
metadata: | |||
name: podgroups.scheduling.sigs.dev | |||
spec: | |||
group: scheduling.sigs.dev | |||
names: | |||
kind: PodGroup | |||
plural: podgroups | |||
scope: Namespaced | |||
validation: | |||
openAPIV3Schema: | |||
properties: | |||
apiVersion: | |||
type: string | |||
kind: | |||
type: string | |||
metadata: | |||
type: object | |||
spec: | |||
properties: | |||
minMember: | |||
format: int32 | |||
type: integer | |||
queue: | |||
type: string | |||
priorityClassName: | |||
type: string | |||
type: object | |||
status: | |||
properties: | |||
succeeded: | |||
format: int32 | |||
type: integer | |||
failed: | |||
format: int32 | |||
type: integer | |||
running: | |||
format: int32 | |||
type: integer | |||
type: object | |||
type: object | |||
version: v1alpha2 |
@@ -0,0 +1,39 @@ | |||
apiVersion: apiextensions.k8s.io/v1beta1 | |||
kind: CustomResourceDefinition | |||
metadata: | |||
name: queues.scheduling.sigs.dev | |||
spec: | |||
group: scheduling.sigs.dev | |||
names: | |||
kind: Queue | |||
plural: queues | |||
scope: Cluster | |||
validation: | |||
openAPIV3Schema: | |||
properties: | |||
apiVersion: | |||
type: string | |||
kind: | |||
type: string | |||
metadata: | |||
type: object | |||
spec: | |||
properties: | |||
weight: | |||
format: int32 | |||
type: integer | |||
type: object | |||
status: | |||
properties: | |||
unknown: | |||
format: int32 | |||
type: integer | |||
pending: | |||
format: int32 | |||
type: integer | |||
running: | |||
format: int32 | |||
type: integer | |||
type: object | |||
type: object | |||
version: v1alpha2 |
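The two `scheduling.sigs.dev/v1alpha2` CRDs above define the schemas for `Queue` (cluster-scoped, weighted) and `PodGroup` (namespaced, gang-scheduling unit). Hypothetical example objects conforming to those schemas, with illustrative names:

```yaml
# Illustrative instances of the CRDs defined above.
apiVersion: scheduling.sigs.dev/v1alpha2
kind: Queue
metadata:
  name: research
spec:
  weight: 4            # relative share of cluster resources
---
apiVersion: scheduling.sigs.dev/v1alpha2
kind: PodGroup
metadata:
  name: training-job
  namespace: default
spec:
  minMember: 3         # gang-schedule only when 3 pods can start together
  queue: research
```

Pods then join the group via the kube-batch pod-group annotation, so the scheduler admits all `minMember` pods at once or none.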
@@ -0,0 +1,7 @@ | |||
The batch scheduler of Kubernetes. | |||
{{- $enabled := include "schedulerEnabled" . -}} | |||
{{- if eq $enabled "1" -}} | |||
scheduler kube-batch is enabled. | |||
{{- else -}} | |||
scheduler kube-batch is disabled.
{{- end -}}