Kubebuilder Deep Dive: Source Code Analysis
In the previous articles we built a complete Operator, covering CRUD, pre-delete hooks (finalizers), Status, Events, OwnerReferences, and WebHooks — most of the topics you will run into when developing an Operator. kubebuilder does a great deal for us, so that development mostly comes down to writing a single Reconcile function. From another angle, though, kubebuilder is still a black box to us, and that raises a number of questions:
- How is the Reconcile method triggered?
- How are the different resources recognized?
- How does the whole thing work end to end?
- …
Architecture
Let's first look at the architecture diagram from the official documentation [1]:
[Figure: kubebuilder architecture diagram]
- Process: the process started via main.go. Normally there is one process per Controller; with high availability enabled there may be several.
- Manager: each process has one Manager. It is the core component, mainly responsible for:
  - exposing metrics
  - webhook certificates
  - initializing the shared cache
  - initializing the shared clients used to talk to the APIServer
  - running all of the Controllers
- Client: when we create, update, or delete a resource, the call usually goes through the Client directly to the APIServer.
- Cache: keeps the resources the Controller cares about in sync. Its core is a GVK -> Informer mapping; our Get and List calls are normally served from the Cache.
- Controller: where the controller's business logic lives. One Manager may host multiple Controllers, and we usually only need to implement the Reconcile method. The Predicate in the diagram is an event filter, which lets a Controller drop events it does not care about.
- WebHook: where admission control is implemented. There are two kinds of hooks: a MutatingAdmissionWebhook, for which the type implements the Defaulter interface, and a ValidatingAdmissionWebhook, for which it implements the Validator interface (a minimal sketch of both follows this list).
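For reference, here is a minimal, hedged sketch of those two interfaces implemented on the NodePool type from this series; the method bodies are placeholders, not the actual defaulting/validation logic:

```go
package v1

import "k8s.io/apimachinery/pkg/runtime"

// Default implements webhook.Defaulter and backs the MutatingAdmissionWebhook.
func (r *NodePool) Default() {
	// fill in default field values on r here
}

// ValidateCreate, ValidateUpdate and ValidateDelete together implement
// webhook.Validator and back the ValidatingAdmissionWebhook.
func (r *NodePool) ValidateCreate() error                   { return nil }
func (r *NodePool) ValidateUpdate(old runtime.Object) error { return nil }
func (r *NodePool) ValidateDelete() error                   { return nil }
```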
Source Code Analysis
With the basic architecture in mind, let's start from the entry point, main.go, and see what kubebuilder is quietly doing behind the scenes.
main.go
```go
// Parameter binding and error-checking code omitted
func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		MetricsBindAddress:     metricsAddr,
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "97acaccf.lailin.xyz",
		// CertDir: "config/cert/", // point at the certificates manually for local testing
	})

	(&controllers.NodePoolReconciler{
		Client:   mgr.GetClient(),
		Log:      ctrl.Log.WithName("controllers").WithName("NodePool"),
		Scheme:   mgr.GetScheme(),
		Recorder: mgr.GetEventRecorderFor("NodePool"),
	}).SetupWithManager(mgr)

	(&nodesv1.NodePool{}).SetupWebhookWithManager(mgr)
	//+kubebuilder:scaffold:builder

	mgr.AddHealthzCheck("healthz", healthz.Ping)
	mgr.AddReadyzCheck("readyz", healthz.Ping)

	setupLog.Info("starting manager")
	mgr.Start(ctrl.SetupSignalHandler())
}
```
As you can see, main.go mainly does the startup work:
- Create a Manager
- Use that Manager to create a Controller
- Start the WebHook
- Add health checks
- Start the Manager

Let's follow the logic of main step by step.
NewManager
```go
// New returns a new Manager for creating Controllers.
func New(config *rest.Config, options Options) (Manager, error) {
	// Configuration initialization code omitted

	// Create the cache
	cache, err := options.NewCache(config,
		cache.Options{
			Scheme:    options.Scheme,     // the scheme passed in from main
			Mapper:    mapper,             // maps between Kubernetes API objects and Go types
			Resync:    options.SyncPeriod, // defaults to 10 hours; usually best left alone
			Namespace: options.Namespace,  // the namespace to watch
		})

	// Create the clients that talk to the APIServer, with reads and writes split
	clientOptions := client.Options{Scheme: options.Scheme, Mapper: mapper}
	apiReader, err := client.New(config, clientOptions)
	writeObj, err := options.ClientBuilder.
		WithUncached(options.ClientDisableCacheFor...).
		Build(cache, config, clientOptions)
	if options.DryRunClient {
		writeObj = client.NewDryRunClient(writeObj)
	}

	// Create the event recorder
	recorderProvider, err := options.newRecorderProvider(config, options.Scheme, options.Logger.WithName("events"), options.makeBroadcaster)

	// If high availability is required, create the leader-election configuration
	leaderConfig := config
	if options.LeaderElectionConfig != nil {
		leaderConfig = options.LeaderElectionConfig
	}
	resourceLock, err := options.newResourceLock(leaderConfig, recorderProvider, leaderelection.Options{
		LeaderElection:             options.LeaderElection,
		LeaderElectionResourceLock: options.LeaderElectionResourceLock,
		LeaderElectionID:           options.LeaderElectionID,
		LeaderElectionNamespace:    options.LeaderElectionNamespace,
	})

	// Create the metrics and health-probe listeners
	metricsListener, err := options.newMetricsListener(options.MetricsBindAddress)

	// By default we have no extra endpoints to expose on metrics http server.
	metricsExtraHandlers := make(map[string]http.Handler)

	// Create health probes listener. This will throw an error if the bind
	// address is invalid or already in use.
	healthProbeListener, err := options.newHealthProbeListener(options.HealthProbeBindAddress)
	if err != nil {
		return nil, err
	}

	// Finally, assemble everything into the manager
	return &controllerManager{
		config:                  config,
		scheme:                  options.Scheme,
		cache:                   cache,
		fieldIndexes:            cache,
		client:                  writeObj,
		apiReader:               apiReader,
		recorderProvider:        recorderProvider,
		resourceLock:            resourceLock,
		mapper:                  mapper,
		metricsListener:         metricsListener,
		metricsExtraHandlers:    metricsExtraHandlers,
		logger:                  options.Logger,
		elected:                 make(chan struct{}),
		port:                    options.Port,
		host:                    options.Host,
		certDir:                 options.CertDir,
		leaseDuration:           *options.LeaseDuration,
		renewDeadline:           *options.RenewDeadline,
		retryPeriod:             *options.RetryPeriod,
		healthProbeListener:     healthProbeListener,
		readinessEndpointName:   options.ReadinessEndpointName,
		livenessEndpointName:    options.LivenessEndpointName,
		gracefulShutdownTimeout: *options.GracefulShutdownTimeout,
		internalProceduresStop:  make(chan struct{}),
	}, nil
}
```
Creating the Cache
```go
func New(config *rest.Config, opts Options) (Cache, error) {
	opts, err := defaultOpts(config, opts)
	if err != nil {
		return nil, err
	}
	im := internal.NewInformersMap(config, opts.Scheme, opts.Mapper, *opts.Resync, opts.Namespace)
	return &informerCache{InformersMap: im}, nil
}
```
This mainly calls NewInformersMap to build the Informer mappings.
```go
func NewInformersMap(config *rest.Config,
	scheme *runtime.Scheme,
	mapper meta.RESTMapper,
	resync time.Duration,
	namespace string) *InformersMap {

	return &InformersMap{
		structured:   newStructuredInformersMap(config, scheme, mapper, resync, namespace),
		unstructured: newUnstructuredInformersMap(config, scheme, mapper, resync, namespace),
		metadata:     newMetadataInformersMap(config, scheme, mapper, resync, namespace),

		Scheme: scheme,
	}
}
```
NewInformersMap creates the structured, unstructured, and metadata-only InformersMaps separately. All three constructors eventually call newSpecificInformersMap; the only difference is the createListWatcherFunc each one passes in.
```go
func newSpecificInformersMap(config *rest.Config,
	scheme *runtime.Scheme,
	mapper meta.RESTMapper,
	resync time.Duration,
	namespace string,
	createListWatcher createListWatcherFunc) *specificInformersMap {
	ip := &specificInformersMap{
		config:            config,
		Scheme:            scheme,
		mapper:            mapper,
		informersByGVK:    make(map[schema.GroupVersionKind]*MapEntry),
		codecs:            serializer.NewCodecFactory(scheme),
		paramCodec:        runtime.NewParameterCodec(scheme),
		resync:            resync,
		startWait:         make(chan struct{}),
		createListWatcher: createListWatcher,
		namespace:         namespace,
	}
	return ip
}
```
newSpecificInformersMap is similar to the regular InformersMap, except that it does not implement the WaitForCacheSync method.
Take the structured variant's createStructuredListWatch as an example: it mainly returns a ListWatch object used to build a SharedIndexInformer.
```go
func createStructuredListWatch(gvk schema.GroupVersionKind, ip *specificInformersMap) (*cache.ListWatch, error) {
	// Kubernetes APIs work against Resources, not GroupVersionKinds. Map the
	// groupVersionKind to the Resource API we will use.
	mapping, err := ip.mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	if err != nil {
		return nil, err
	}

	client, err := apiutil.RESTClientForGVK(gvk, false, ip.config, ip.codecs)
	if err != nil {
		return nil, err
	}
	listGVK := gvk.GroupVersion().WithKind(gvk.Kind + "List")
	listObj, err := ip.Scheme.New(listGVK)
	if err != nil {
		return nil, err
	}

	// TODO: the functions that make use of this ListWatch should be adapted to
	// pass in their own contexts instead of relying on this fixed one here.
	ctx := context.TODO()

	// Create a new ListWatch for the obj
	return &cache.ListWatch{
		ListFunc: func(opts metav1.ListOptions) (runtime.Object, error) {
			res := listObj.DeepCopyObject()
			isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
			err := client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Do(ctx).Into(res)
			return res, err
		},
		// Setup the watch function
		WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
			// Watch needs to be set to true separately
			opts.Watch = true
			isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
			return client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Watch(ctx)
		},
	}, nil
}
```
To sum up: the cache builds a set of InformersMaps that map each GVK to an Informer, and each Informer Lists and Watches its GVK via the ListWatch functions.
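As a hedged illustration of what this mapping buys us (the exact signatures vary across controller-runtime versions, so treat this as a sketch): the Manager exposes the cache, and we can pull the shared informer for any GVK out of it and attach extra event handlers.

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	toolscache "k8s.io/client-go/tools/cache"
	ctrl "sigs.k8s.io/controller-runtime"
)

// addPodHandler is a hypothetical helper: it fetches the shared Pod informer
// from the Manager's cache and attaches an additional event handler to it.
func addPodHandler(ctx context.Context, mgr ctrl.Manager) error {
	inf, err := mgr.GetCache().GetInformer(ctx, &corev1.Pod{})
	if err != nil {
		return err
	}
	inf.AddEventHandler(toolscache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			// react to Pod creations served from the shared cache
		},
	})
	return nil
}
```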
Creating the Client
```go
func New(config *rest.Config, options Options) (Client, error) {
	if config == nil {
		return nil, fmt.Errorf("must provide non-nil rest.Config to client.New")
	}

	// Init a scheme if none provided
	if options.Scheme == nil {
		options.Scheme = scheme.Scheme
	}

	// Init a Mapper if none provided
	if options.Mapper == nil {
		var err error
		options.Mapper, err = apiutil.NewDynamicRESTMapper(config)
		if err != nil {
			return nil, err
		}
	}

	clientcache := &clientCache{
		config: config,
		scheme: options.Scheme,
		mapper: options.Mapper,
		codecs: serializer.NewCodecFactory(options.Scheme),

		structuredResourceByType:   make(map[schema.GroupVersionKind]*resourceMeta),
		unstructuredResourceByType: make(map[schema.GroupVersionKind]*resourceMeta),
	}

	rawMetaClient, err := metadata.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("unable to construct metadata-only client for use as part of client: %w", err)
	}

	c := &client{
		typedClient: typedClient{
			cache:      clientcache,
			paramCodec: runtime.NewParameterCodec(options.Scheme),
		},
		unstructuredClient: unstructuredClient{
			cache:      clientcache,
			paramCodec: noConversionParamCodec{},
		},
		metadataClient: metadataClient{
			client:     rawMetaClient,
			restMapper: options.Mapper,
		},
		scheme: options.Scheme,
		mapper: options.Mapper,
	}

	return c, nil
}
```
Two clients are created here: one for reads and one for writes. The read client is backed by the cache above; only the write client talks to the APIServer directly.
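A hedged sketch of what this read/write split means in practice inside a Reconcile (the label mutation is purely illustrative): Get is served from the shared cache, while Update goes through the write client straight to the APIServer. When an uncached read is genuinely needed, mgr.GetAPIReader() exposes the apiReader created above.

```go
// Inside NodePoolReconciler.Reconcile — illustrative only.
var pool nodesv1.NodePool

// Read: served from the shared cache, no APIServer round-trip.
if err := r.Get(ctx, req.NamespacedName, &pool); err != nil {
	return ctrl.Result{}, client.IgnoreNotFound(err)
}

// Write: goes through the write client directly to the APIServer.
if pool.Labels == nil {
	pool.Labels = map[string]string{}
}
pool.Labels["hypothetical-label"] = "true" // hypothetical mutation
if err := r.Update(ctx, &pool); err != nil {
	return ctrl.Result{}, err
}
return ctrl.Result{}, nil
```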
Controller
Now let's look at how the core Controller is initialized and how it works.
```go
if err = (&controllers.NodePoolReconciler{
	Client:   mgr.GetClient(),
	Log:      ctrl.Log.WithName("controllers").WithName("NodePool"),
	Scheme:   mgr.GetScheme(),
	Recorder: mgr.GetEventRecorderFor("NodePool"),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "NodePool")
	os.Exit(1)
}
```
The code in main.go simply initializes the Controller struct and then calls its SetupWithManager method.
```go
// SetupWithManager sets up the controller with the Manager.
func (r *NodePoolReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&nodesv1.NodePool{}).
		Watches(&source.Kind{Type: &corev1.Node{}}, handler.Funcs{UpdateFunc: r.nodeUpdateHandler}).
		Complete(r)
}
```
As mentioned before, SetupWithManager uses the builder pattern to declare the objects we want to watch; only events on these objects will trigger our Reconcile logic. Under the hood, Complete ends up calling the Build method.
```go
func (blder *Builder) Build(r reconcile.Reconciler) (controller.Controller, error) {
	// Parameter validation omitted

	// Set the Config
	blder.loadRestConfig()

	// Set the ControllerManagedBy
	if err := blder.doController(r); err != nil {
		return nil, err
	}

	// Set the Watch
	if err := blder.doWatch(); err != nil {
		return nil, err
	}

	return blder.ctrl, nil
}
```
Build mainly calls two methods: doController and doWatch.
```go
func (blder *Builder) doController(r reconcile.Reconciler) error {
	ctrlOptions := blder.ctrlOptions
	if ctrlOptions.Reconciler == nil {
		ctrlOptions.Reconciler = r
	}

	// Retrieve the GVK from the object we're reconciling
	// to prepopulate logger information, and to optionally generate a default name.
	gvk, err := getGvk(blder.forInput.object, blder.mgr.GetScheme())
	if err != nil {
		return err
	}

	// Setup the logger.
	if ctrlOptions.Log == nil {
		ctrlOptions.Log = blder.mgr.GetLogger()
	}
	ctrlOptions.Log = ctrlOptions.Log.WithValues("reconciler group", gvk.Group, "reconciler kind", gvk.Kind)

	// Build the controller and return.
	blder.ctrl, err = newController(blder.getControllerName(gvk), blder.mgr, ctrlOptions)
	return err
}
```
doController initializes a Controller, wiring in the Reconciler we implemented and deriving the controller's name from our GVK.
```go
func (blder *Builder) doWatch() error {
	// Reconcile type
	typeForSrc, err := blder.project(blder.forInput.object, blder.forInput.objectProjection)
	if err != nil {
		return err
	}
	src := &source.Kind{Type: typeForSrc}
	hdler := &handler.EnqueueRequestForObject{}
	allPredicates := append(blder.globalPredicates, blder.forInput.predicates...)
	if err := blder.ctrl.Watch(src, hdler, allPredicates...); err != nil {
		return err
	}

	// Watches the managed types
	for _, own := range blder.ownsInput {
		typeForSrc, err := blder.project(own.object, own.objectProjection)
		if err != nil {
			return err
		}
		src := &source.Kind{Type: typeForSrc}
		hdler := &handler.EnqueueRequestForOwner{
			OwnerType:    blder.forInput.object,
			IsController: true,
		}
		allPredicates := append([]predicate.Predicate(nil), blder.globalPredicates...)
		allPredicates = append(allPredicates, own.predicates...)
		if err := blder.ctrl.Watch(src, hdler, allPredicates...); err != nil {
			return err
		}
	}

	// Do the watch requests
	for _, w := range blder.watchesInput {
		allPredicates := append([]predicate.Predicate(nil), blder.globalPredicates...)
		allPredicates = append(allPredicates, w.predicates...)

		// If the source of this watch is of type *source.Kind, project it.
		if srckind, ok := w.src.(*source.Kind); ok {
			typeForSrc, err := blder.project(srckind.Type, w.objectProjection)
			if err != nil {
				return err
			}
			srckind.Type = typeForSrc
		}
		if err := blder.ctrl.Watch(w.src, w.eventhandler, allPredicates...); err != nil {
			return err
		}
	}
	return nil
}
```
Watch is what subscribes to the resource changes we care about. In blder.ctrl.Watch(src, hdler, allPredicates...), allPredicates is the list of filters: an event from the source is handed to the EventHandler hdler only if every predicate returns true. This is also where the handler gets registered on the Informer.
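To make predicates concrete, here is a hedged sketch using the real predicate.Funcs type: a filter that only lets Node update events through when the object actually changed (periodic resyncs redeliver the same resourceVersion). It could be passed as an extra argument to the Watch call above:

```go
import (
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// onlyRealNodeChanges drops update events whose resourceVersion is unchanged.
// Funcs left nil (Create/Delete/Generic) default to letting events through.
var onlyRealNodeChanges = predicate.Funcs{
	UpdateFunc: func(e event.UpdateEvent) bool {
		return e.ObjectOld.GetResourceVersion() != e.ObjectNew.GetResourceVersion()
	},
}
```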
Startup
```go
func (cm *controllerManager) Start(ctx context.Context) (err error) {
	cm.internalCtx, cm.internalCancel = context.WithCancel(ctx)

	// Used to signal that all of the goroutines have exited
	stopComplete := make(chan struct{})
	defer close(stopComplete)

	// ......

	// Channel for collecting errors
	cm.errChan = make(chan error)

	// Start the metrics server if metrics are enabled
	if cm.metricsListener != nil {
		go cm.serveMetrics()
	}

	// Start the health-probe server
	if cm.healthProbeListener != nil {
		go cm.serveHealthProbes()
	}

	go cm.startNonLeaderElectionRunnables()

	go func() {
		if cm.resourceLock != nil {
			err := cm.startLeaderElection()
			if err != nil {
				cm.errChan <- err
			}
		} else {
			// Treat not having leader election enabled the same as being elected.
			close(cm.elected)
			go cm.startLeaderElectionRunnables()
		}
	}()

	// Decide whether we need to exit
	select {
	case <-ctx.Done():
		// We are done
		return nil
	case err := <-cm.errChan:
		// Error starting or running a runnable
		return err
	}
}
```
Whether or not the process is the leader, the Controllers are ultimately started via startRunnable.
```go
func (cm *controllerManager) startNonLeaderElectionRunnables() {
	cm.mu.Lock()
	defer cm.mu.Unlock()

	cm.waitForCache(cm.internalCtx)

	// Start the non-leaderelection Runnables after the cache has synced
	for _, c := range cm.nonLeaderElectionRunnables {
		// Controllers block, but we want to return an error if any have an error starting.
		// Write any Start errors to a channel so we can return them
		cm.startRunnable(c)
	}
}
```
This ultimately calls the Controller's Start method.
```go
// Start implements controller.Controller
func (c *Controller) Start(ctx context.Context) error {
	// A Controller can only be started once
	c.mu.Lock()
	if c.Started {
		return errors.New("controller was started more than once. This is likely to be caused by being added to a manager multiple times")
	}

	// Set the internal context.
	c.ctx = ctx

	// Create the workqueue
	c.Queue = c.MakeQueue()
	defer c.Queue.ShutDown()

	err := func() error {
		defer c.mu.Unlock()
		defer utilruntime.HandleCrash()

		// Start the event sources, registering the handlers on the informers
		for _, watch := range c.startWatches {
			c.Log.Info("Starting EventSource", "source", watch.src)
			if err := watch.src.Start(ctx, watch.handler, c.Queue, watch.predicates...); err != nil {
				return err
			}
		}

		// Start the Controller once the caches have synced
		c.Log.Info("Starting Controller")
		for _, watch := range c.startWatches {
			syncingSource, ok := watch.src.(source.SyncingSource)
			if !ok {
				continue
			}
			if err := syncingSource.WaitForSync(ctx); err != nil {
				// This code is unreachable in case of kube watches since WaitForCacheSync will never return an error
				// Leaving it here because that could happen in the future
				err := fmt.Errorf("failed to wait for %s caches to sync: %w", c.Name, err)
				c.Log.Error(err, "Could not wait for Cache to sync")
				return err
			}
		}

		// All the watches have been started, we can reset the local slice.
		//
		// We should never hold watches more than necessary, each watch source can hold a backing cache,
		// which won't be garbage collected if we hold a reference to it.
		c.startWatches = nil

		if c.JitterPeriod == 0 {
			c.JitterPeriod = 1 * time.Second
		}

		// Launch workers to process resources
		c.Log.Info("Starting workers", "worker count", c.MaxConcurrentReconciles)
		ctrlmetrics.WorkerCount.WithLabelValues(c.Name).
			Set(float64(c.MaxConcurrentReconciles))
		for i := 0; i < c.MaxConcurrentReconciles; i++ {
			go wait.UntilWithContext(ctx, func(ctx context.Context) {
				// Pull items off the queue; each one triggers our reconcile logic
				for c.processNextWorkItem(ctx) {
				}
			}, c.JitterPeriod)
		}

		c.Started = true
		return nil
	}()
	if err != nil {
		return err
	}

	<-ctx.Done()
	c.Log.Info("Stopping workers")
	return nil
}

// attempt to process it, by calling the reconcileHandler.
func (c *Controller) processNextWorkItem(ctx context.Context) bool {
	obj, shutdown := c.Queue.Get()
	if shutdown {
		// Stop working
		return false
	}

	// We call Done here so the workqueue knows we have finished
	// processing this item. We also must remember to call Forget if we
	// do not want this work item being re-queued. For example, we do
	// not call Forget if a transient error occurs, instead the item is
	// put back on the workqueue and attempted again after a back-off
	// period.
	defer c.Queue.Done(obj)

	ctrlmetrics.ActiveWorkers.WithLabelValues(c.Name).Add(1)
	defer ctrlmetrics.ActiveWorkers.WithLabelValues(c.Name).Add(-1)

	c.reconcileHandler(ctx, obj)
	return true
}
```
Summary
The Reconcile method is triggered as follows: the Informers in the Cache pick up resource change events, which then drive our own Reconcile implementation through a producer-consumer pattern.
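The producer half of that pattern lives in the EventHandler. Slightly simplified from controller-runtime's handler.EnqueueRequestForObject, every event becomes a reconcile.Request keyed by namespace/name, which the workers in processNextWorkItem consume and hand to our Reconcile:

```go
// Create enqueues a Request on create events; Update/Delete/Generic look alike.
func (e *EnqueueRequestForObject) Create(evt event.CreateEvent, q workqueue.RateLimitingInterface) {
	q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
		Name:      evt.Object.GetName(),
		Namespace: evt.Object.GetNamespace(),
	}})
}
```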
Kubebuilder is an excellent framework for Operator development: it greatly simplifies the development process while making full use of Go interfaces to leave plenty of room for extension. There is a lot to learn from here; if our own business frameworks could reach this level, they would be in good shape.
References
[1] Architecture diagram: https://master.book.kubebuilder.io/architecture.html
Original post: https://lailin.xyz/post/operator-09-kubebuilder-code.html