Building TensorFlow 1.15.3 on macOS Catalina

Build environment

  • macOS Catalina 10.15.6
  • Python 3.8
  • Compiler: Apple clang version 11.0.3 (clang-1103.0.32.62)

Create and activate a conda environment:

conda create -n tf_38_2 python=3.8
conda activate tf_38_2

Installing the pip modules

pip install -U --user pip six 'numpy<1.19.0' wheel setuptools mock 'future>=0.17.1' 'gast==0.2.2' typing_extensions
pip install -U --user keras_applications --no-deps
pip install -U --user keras_preprocessing --no-deps
git clone https://github.com/tensorflow/tensorflow tensorflow_38_2
cd tensorflow_38_2
git checkout v1.15.3

Installing Bazel

export BAZEL_VERSION=0.26.1
curl -fLO "https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh"
chmod +x "bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh"
./bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh --user
bazel --version

Modify the source files by referring to these two patches:

Fix GCC 10.1 compile error. by cbalint13 · Pull Request #40654 · tensorflow/tensorflow · GitHub

py-tensorflow1: fix Python 3.8 build. · macports/macports-ports@f63da02 · GitHub
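
One way to pull those changes in is to download the PR diff from GitHub and apply it to the checked-out tree; a rough sketch (the hunks may not apply cleanly against the v1.15.3 tag, and the MacPorts commit patches files in its own ports tree, so its changes may need to be transcribed by hand):

# fetch the diff of PR #40654 and apply it if it fits the v1.15.3 sources
curl -fL https://github.com/tensorflow/tensorflow/pull/40654.diff -o gcc10-fix.diff
git apply --check gcc10-fix.diff && git apply gcc10-fix.diff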

Running ./configure

% ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.26.1 installed.
Please specify the location of python. [Default is /Users/tak/opt/anaconda3/envs/tf_38_2/bin/python]: 


Found possible Python library paths:
  /Users/tak/opt/anaconda3/envs/tf_38_2/lib/python3.8/site-packages
Please input the desired Python library path to use.  Default is [/Users/tak/opt/anaconda3/envs/tf_38_2/lib/python3.8/site-packages]

Do you wish to build TensorFlow with XLA JIT support? [Y/n]: y
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: 


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Do you wish to build TensorFlow with iOS support? [y/N]: n
No iOS support will be enabled for TensorFlow.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
    --config=gdr            # Build with GDR support.
    --config=verbs          # Build with libverbs support.
    --config=ngraph         # Build with Intel nGraph support.
    --config=numa           # Build with NUMA support.
    --config=dynamic_kernels    # (Experimental) Build kernels into separate shared objects.
    --config=v2             # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
    --config=noaws          # Disable AWS S3 filesystem support.
    --config=nogcp          # Disable GCP support.
    --config=nohdfs         # Disable HDFS support.
    --config=noignite       # Disable Apache Ignite support.
    --config=nokafka        # Disable Apache Kafka support.
    --config=nonccl         # Disable NVIDIA NCCL support.
Configuration finished

Running the build

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

Once the build finishes, modify the source by referring to the following patch:

Fix TensorFlow on Python 3.8 logger issue by yongtang · Pull Request #33953 · tensorflow/tensorflow · GitHub

Then build the pip package and install the wheel:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-1.15.3-cp38-cp38-macosx_10_15_x86_64.whl

That completes the build.
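
As a quick sanity check of the installed wheel:

python -c "import tensorflow as tf; print(tf.__version__)"
# should print 1.15.3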

Building TensorFlow 2.3.0 on macOS Catalina

Build environment

  • macOS Catalina 10.15.6
  • Python 3.8
  • Compiler: Apple clang version 11.0.3 (clang-1103.0.32.62)

Install Anaconda for macOS

Anaconda | Individual Edition

Then create and activate a conda environment:

conda create -n tf_38 python=3.8
conda activate tf_38

The setup output looks like this:

Collecting package metadata (current_repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
  current version: 4.8.3
  latest version: 4.8.4

Please update conda by running

    $ conda update -n base -c defaults conda



## Package Plan ##

  environment location: /Users/tak/opt/anaconda3/envs/tf_38

  added / updated specs:
    - python=3.8


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    pip-20.2.2                 |           py38_0         1.7 MB
    python-3.8.5               |       h26836e1_0        20.7 MB
    setuptools-49.6.0          |           py38_0         747 KB
    ------------------------------------------------------------
                                           Total:        23.1 MB

The following NEW packages will be INSTALLED:

  ca-certificates    pkgs/main/osx-64::ca-certificates-2020.7.22-0
  certifi            pkgs/main/osx-64::certifi-2020.6.20-py38_0
  libcxx             pkgs/main/osx-64::libcxx-10.0.0-1
  libedit            pkgs/main/osx-64::libedit-3.1.20191231-h1de35cc_1
  libffi             pkgs/main/osx-64::libffi-3.3-hb1e8313_2
  ncurses            pkgs/main/osx-64::ncurses-6.2-h0a44026_1
  openssl            pkgs/main/osx-64::openssl-1.1.1g-h1de35cc_0
  pip                pkgs/main/osx-64::pip-20.2.2-py38_0
  python             pkgs/main/osx-64::python-3.8.5-h26836e1_0
  readline           pkgs/main/osx-64::readline-8.0-h1de35cc_0
  setuptools         pkgs/main/osx-64::setuptools-49.6.0-py38_0
  sqlite             pkgs/main/osx-64::sqlite-3.33.0-hffcf06c_0
  tk                 pkgs/main/osx-64::tk-8.6.10-hb0a8c7a_0
  wheel              pkgs/main/noarch::wheel-0.35.1-py_0
  xz                 pkgs/main/osx-64::xz-5.2.5-h1de35cc_0
  zlib               pkgs/main/osx-64::zlib-1.2.11-h1de35cc_3

Installing the pip modules

pip install -U --user pip six 'numpy<1.19.0' wheel setuptools mock 'future>=0.17.1' 'gast==0.3.3' typing_extensions
pip install -U --user keras_applications --no-deps
pip install -U --user keras_preprocessing --no-deps
git clone https://github.com/tensorflow/tensorflow tensorflow_38
cd tensorflow_38
git checkout v2.3.0

Installing Bazel

export BAZEL_VERSION=3.1.0
curl -fLO "https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh"
chmod +x "bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh"
./bazel-${BAZEL_VERSION}-installer-darwin-x86_64.sh --user

export PATH="$PATH:$HOME/bin"

bazel --version

Configuring Xcode

sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
sudo xcodebuild -license
bazel clean --expunge

Modify the files by referring to this patch (the same fix used for the 1.15.3 build):

Fix GCC 10.1 compile error. by cbalint13 · Pull Request #40654 · tensorflow/tensorflow · GitHub

Running ./configure

% ./configure     
You have bazel 3.1.0 installed.
Please specify the location of python. [Default is /Users/tak/opt/anaconda3/envs/tf_38/bin/python3]: 


Found possible Python library paths:
  /Users/tak/opt/anaconda3/envs/tf_38/lib/python3.8/site-packages
Please input the desired Python library path to use.  Default is [/Users/tak/opt/anaconda3/envs/tf_38/lib/python3.8/site-packages]

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:  


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Do you wish to build TensorFlow with iOS support? [y/N]: n
No iOS support will be enabled for TensorFlow.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
    --config=ngraph         # Build with Intel nGraph support.
    --config=numa           # Build with NUMA support.
    --config=dynamic_kernels    # (Experimental) Build kernels into separate shared objects.
    --config=v2             # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
    --config=noaws          # Disable AWS S3 filesystem support.
    --config=nogcp          # Disable GCP support.
    --config=nohdfs         # Disable HDFS support.
    --config=nonccl         # Disable NVIDIA NCCL support.
Configuration finished

Starting the build

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

Installation

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-2.3.0-cp38-cp38-macosx_10_15_x86_64.whl

That completes the installation.
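
A quick smoke test of the new wheel (any small op will do):

python -c "import tensorflow as tf; print(tf.__version__); print(tf.reduce_sum(tf.random.normal([100, 100])))"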

Encoding on the iMac Pro

Environment

Source material

The usual clips:

  • sony_xavcs_30p.MP4
    • 1920 × 1080 29.97fps
    • 50Mbps
    • Sony RX100M4 X-AVCS
    • 11175 frames
    • 6 min 13 s
  • canon_uhd.MP4
    • 3840x2160 29.97fps
    • 120Mbps
    • Canon PowerShot G7 X Mark III
    • 1423 frames
    • 47 s
  • gh5_422_uhd.MP4
    • 3840x2160 29.97fps
    • 150Mbps
    • LUMIX GH5
    • 1770 frames
    • 59 s

Adobe Media Encoder

Input               Codec         Min:Sec  FPS
sony_xavcs_30p.MP4  H.264         1:03     177
sony_xavcs_30p.MP4  HEVC(H.265)   1:12     155
canon_uhd.MP4       H.264         0:14     101
canon_uhd.MP4       HEVC(H.265)   0:13     109
gh5_422_uhd.MP4     H.264         0:20     88
gh5_422_uhd.MP4     HEVC(H.265)   0:20     88

When encoding sony_xavcs_30p.MP4, the CPU is barely used and almost everything runs on the GPU; the job is limited by GPU encoding speed.

f:id:taku-woohar:20200521192610p:plain

With canon_uhd.MP4, CPU load is higher than before, but utilization is still low. For H.265, GPU utilization is also slightly lower.

f:id:taku-woohar:20200521193036p:plain

With gh5_422_uhd.MP4, CPU utilization rises considerably, presumably spent mostly on decoding. GPU utilization is also high, so this format keeps both the CPU and the GPU close to fully used.

f:id:taku-woohar:20200521193242p:plain

FFmpeg

FFmpeg version 4.2.3 is used.

Input               Codec              FPS
sony_xavcs_30p.MP4  libx264            81
sony_xavcs_30p.MP4  h264_videotoolbox  189
sony_xavcs_30p.MP4  libx265            27
sony_xavcs_30p.MP4  hevc_videotoolbox  157
canon_uhd.MP4       libx264            26
canon_uhd.MP4       h264_videotoolbox  52
canon_uhd.MP4       libx265            10.36
canon_uhd.MP4       hevc_videotoolbox  45
gh5_422_uhd.MP4     libx264            28
gh5_422_uhd.MP4     h264_videotoolbox  52
gh5_422_uhd.MP4     libx265            12
gh5_422_uhd.MP4     hevc_videotoolbox  45

Because the HEVC VideoToolbox encoder can be used, H.265 seems to encode a bit faster than with libx265.
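
To confirm that this FFmpeg build actually includes the VideoToolbox encoders, list them first:

ffmpeg -hide_banner -encoders | grep videotoolbox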

ffmpeg -y -i sony_xavcs_30p.MP4 -c:v libx264 -b:v 5000k fhd2fhd_x264_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v h264_videotoolbox -b:v 5000k fhd2fhd_toolbox_h264_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v libx265 -b:v 5000k fhd2fhd_x265_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v hevc_videotoolbox -b:v 5000k fhd2fhd_toolbox_h265_1.mp4


ffmpeg -y -i canon_uhd.MP4 -c:v libx264 -b:v 5000k uhd2fhd_x264_1.mp4
ffmpeg -y -i canon_uhd.MP4 -c:v h264_videotoolbox -b:v 5000k uhd2fhd_toolbox_h264_1.mp4
ffmpeg -y -i canon_uhd.MP4 -c:v libx265 -b:v 5000k uhd2fhd_h265_1.mp4
ffmpeg -y -i canon_uhd.MP4 -c:v hevc_videotoolbox -b:v 5000k uhd2fhd_toolbox_h265_1.mp4


ffmpeg -y -i gh5_422_uhd.MP4 -c:v libx264 -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_x264_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -c:v h264_videotoolbox -b:v 5000k  -pix_fmt yuv420p uhd422_to_fhd_toolbox_h264_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -c:v libx265 -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_h265_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -c:v hevc_videotoolbox -b:v 5000k  -pix_fmt yuv420p uhd422_to_fhd_toolbox_h265_1.mp4

Video encoding on a Windows machine

Environment

Source material

As before, the clips are converted to 1080p 29.97 fps.

  • sony_xavcs_30p.MP4
    • 1920 × 1080 29.97fps
    • 50Mbps
    • Sony RX100M4 X-AVCS
    • 11175 frames
    • 6 min 13 s
  • canon_uhd.MP4
    • 3840x2160 29.97fps
    • 120Mbps
    • Canon PowerShot G7 X Mark III
    • 1423 frames
    • 47 s
  • gh5_422_uhd.MP4
    • 3840x2160 29.97fps
    • 150Mbps
    • LUMIX GH5
    • 1770 frames
    • 59 s

Adobe Media Encoder

Input               Codec         Min:Sec  FPS
sony_xavcs_30p.MP4  H.264         0:26     429
sony_xavcs_30p.MP4  HEVC(H.265)   0:41     272
canon_uhd.MP4       H.264         0:12     118
canon_uhd.MP4       HEVC(H.265)   0:13     109
gh5_422_uhd.MP4     H.264         0:27     65
gh5_422_uhd.MP4     HEVC(H.265)   0:28     63

f:id:taku-woohar:20200512234309p:plain

With sony_xavcs_30p.MP4 and canon_uhd.MP4, decoding runs on the integrated Intel GPU and encoding on the GeForce, so they are quite fast.

f:id:taku-woohar:20200512233954p:plain

With gh5_422_uhd.MP4, decoding is done on the CPU, so performance drops somewhat.

FFmpeg

FFmpeg 4.2.2, installed via Chocolatey, is used.

Input               Codec       FPS
sony_xavcs_30p.MP4  libx264     44
sony_xavcs_30p.MP4  h264_nvenc  266
sony_xavcs_30p.MP4  libx265     16
sony_xavcs_30p.MP4  hevc_nvenc  166
canon_uhd.MP4       libx264     40
canon_uhd.MP4       h264_nvenc  92
canon_uhd.MP4       libx265     17.5
canon_uhd.MP4       hevc_nvenc  91
gh5_422_uhd.MP4     libx264     32
gh5_422_uhd.MP4     h264_nvenc  80
gh5_422_uhd.MP4     libx265     21.01
gh5_422_uhd.MP4     hevc_nvenc  78

With FFmpeg, all decoding happens on the CPU, and it does not feel as fast as Adobe Media Encoder; NVENC performance, however, is high.
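
Depending on how FFmpeg was built, decoding can also be offloaded to the GPU via -hwaccel. A hedged sketch using NVDEC (not measured here; the output file name is just an example):

ffmpeg -y -hwaccel cuda -i sony_xavcs_30p.MP4 -c:v h264_nvenc -b:v 5000k fhd2fhd_nvdec_nvenc_h264.mp4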

ffmpeg -y -i sony_xavcs_30p.MP4 -c:v libx264 -b:v 5000k fhd2fhd_x264_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v h264_nvenc -b:v 5000k fhd2fhd_nvenc_h264_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v libx265 -b:v 5000k fhd2fhd_h265_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v hevc_nvenc -b:v 5000k fhd2fhd_nvenc_hevc_1.mp4


ffmpeg -y -i canon_uhd.MP4 -vf "scale=1920:-1" -c:v libx264 -b:v 5000k uhd2fhd_x264_1.mp4
ffmpeg -y -i canon_uhd.MP4 -vf "scale=1920:-1" -c:v h264_nvenc -b:v 5000k uhd2fhd_nvenc_h264_1.mp4
ffmpeg -y -i canon_uhd.MP4 -vf "scale=1920:-1" -c:v libx265 -b:v 5000k uhd2fhd_h265_1.mp4
ffmpeg -y -i canon_uhd.MP4 -vf "scale=1920:-1" -c:v hevc_nvenc -b:v 5000k uhd2fhd_nvenc_h265_1.mp4


ffmpeg -y -i gh5_422_uhd.MP4 -vf "scale=1920:-1" -c:v libx264 -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_x264_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -vf "scale=1920:-1" -c:v h264_nvenc -b:v 5000k  -pix_fmt yuv420p uhd422_to_fhd_nvenc_h264_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -vf "scale=1920:-1" -c:v libx265 -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_h265_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -vf "scale=1920:-1" -c:v hevc_nvenc -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_nvenc_h265_1.mp4

Assorted video encoding tests on the iMac

Environment

Source material

The following three clips are converted to 1920x1080 29.97p and the encoding speed is measured.

  • sony_xavcs_30p.MP4
    • 1920 × 1080 29.97fps
    • 50Mbps
    • Sony RX100M4 X-AVCS
    • 11175 frames
    • 6 min 13 s
  • canon_uhd.MP4
    • 3840x2160 29.97fps
    • 120Mbps
    • Canon PowerShot G7 X Mark III
    • 1423 frames
    • 47 s
  • gh5_422_uhd.MP4
    • 3840x2160 29.97fps
    • 150Mbps
    • LUMIX GH5
    • 1770 frames
    • 59 s

FFmpeg

FFmpeg version 4.2.2 is used.

Input               Codec              FPS
sony_xavcs_30p.MP4  libx264            31
sony_xavcs_30p.MP4  h264_videotoolbox  202
sony_xavcs_30p.MP4  libx265            11
canon_uhd.MP4       libx264            11
canon_uhd.MP4       h264_videotoolbox  53
canon_uhd.MP4       libx265            4.68
gh5_422_uhd.MP4     libx264            11
gh5_422_uhd.MP4     h264_videotoolbox  42
gh5_422_uhd.MP4     libx265            5.0

h264_videotoolbox is the macOS hardware encoding option, and it is very fast.

ffmpeg -y -i sony_xavcs_30p.MP4 -c:v libx264 -b:v 5000k fhd2fhd_x264_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v h264_videotoolbox -b:v 5000k fhd2fhd_toolbox_h264_1.mp4
ffmpeg -y -i sony_xavcs_30p.MP4 -c:v libx265 -b:v 5000k fhd2fhd_x265_1.mp4


ffmpeg -y -i canon_uhd.MP4 -c:v libx264 -b:v 5000k uhd2fhd_x264_1.mp4
ffmpeg -y -i canon_uhd.MP4 -c:v h264_videotoolbox -b:v 5000k uhd2fhd_toolbox_h264_1.mp4
ffmpeg -y -i canon_uhd.MP4 -c:v libx265 -b:v 5000k uhd2fhd_h265_1.mp4


ffmpeg -y -i gh5_422_uhd.MP4 -c:v libx264 -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_x264_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -c:v h264_videotoolbox -b:v 5000k  -pix_fmt yuv420p uhd422_to_fhd_toolbox_h264_1.mp4
ffmpeg -y -i gh5_422_uhd.MP4 -c:v libx265 -b:v 5000k -pix_fmt yuv420p uhd422_to_fhd_h265_1.mp4

Adobe Media Encoder

Input               Codec         Min:Sec  FPS
sony_xavcs_30p.MP4  H.264         1:19     141
sony_xavcs_30p.MP4  HEVC(H.265)   4:18     43
canon_uhd.MP4       H.264         1:53     12
canon_uhd.MP4       HEVC(H.265)   2:02     11
gh5_422_uhd.MP4     H.264         2:31     11
gh5_422_uhd.MP4     HEVC(H.265)   2:43     10

The table above shows results from Adobe Media Encoder 14.1. H.264 uses Metal hardware encoding, while H.265 is encoded on the CPU. Notably, H.265 runs roughly three times faster than FFmpeg, and since the output is a format that QuickTime can play right from the Finder, the encoder appears to be producing a different kind of result.

For FHD sources, encoder performance seems to determine the speed. For UHD sources, decoding may be the bottleneck regardless of the encoder.

Getting started with Prometheus

Create a VM with the following Vagrantfile.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.box = "centos/8"
  # https://github.com/dotless-de/vagrant-vbguest/issues/367
  config.vm.box_url = "http://cloud.centos.org/centos/8/x86_64/images/CentOS-8-Vagrant-8.1.1911-20200113.3.x86_64.vagrant-virtualbox.box"

  config.vm.network "private_network", ip: "192.168.33.30"

  config.vm.provision "shell", inline: <<-SHELL
    mv /etc/localtime /etc/localtime.bak
    ln -s /usr/share/zoneinfo/Asia/Tokyo /etc/localtime
    sed -i "/SELINUX/s/enforcing/disabled/g" /etc/selinux/config

    yum install -y java-11-openjdk
  SHELL


  # config.vm.synced_folder "/Users/tak/Documents/program/vagrant/prometheus_vagrant", "/mnt/shared"
end

Once the VM is up, install Prometheus by following this article:

CentOS8にPrometheusをインストールする (Installing Prometheus on CentOS 8)

If the installation succeeds, the status page becomes available at the following URL:

http://192.168.33.30:9090/graph

Next, the Java program below acts as the exporter: a process that reports memory status.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.net.InetSocketAddress;

public class ExporterTest1 {
    private static final int EXPOSE_PORT = 9092;
    public static void main(String[] args) throws IOException {
        var server = HttpServer.create(new InetSocketAddress(EXPOSE_PORT), 0);
        var context = server.createContext("/");
        context.setHandler(ExporterTest1::handleRequest);
        System.out.println("started at " + EXPOSE_PORT);
        server.start();
    }

    private static void handleRequest(HttpExchange exchange) throws IOException {
        var body = "";
        Runtime runtime = Runtime.getRuntime();
        body += "java_runtime_free_memory\t" + runtime.freeMemory() + "\n";
        var response = body.getBytes();
        exchange.sendResponseHeaders(200, response.length);
        var output = exchange.getResponseBody();
        output.write(response);
        output.close();
    }
}
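
To compile and run the exporter on the VM (assuming the source is saved as ExporterTest1.java; Java 11 was installed by the Vagrant provisioner above) and check its output:

javac ExporterTest1.java
java ExporterTest1 &
curl http://localhost:9092/
# expected output: java_runtime_free_memory<TAB><free bytes>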

Next, edit the Prometheus configuration file.

$ sudo vi /usr/prometheus/prometheus.yml

+  - job_name: 'exporter_test'
+    static_configs:
+    - targets: ['localhost:9092']

$ sudo systemctl restart prometheus
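
As a quick command-line check that Prometheus is scraping the new target, the metric can also be queried through its HTTP API (run on the VM; the metric name matches the exporter above):

curl 'http://localhost:9090/api/v1/query?query=java_runtime_free_memory'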

Opening Status -> Targets in the Prometheus UI shows the new target:

f:id:taku-woohar:20200430220158p:plain
Exporter target listing

The target is recognized as shown above. Back on the top page, enter java_runtime_free_memory, press Execute, and open the Graph tab; if a graph appears, everything is working.

f:id:taku-woohar:20200430220325p:plain
Graph

Trying Kubernetes with Minikube

First, create a VM with Vagrant.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-19.10"
  config.vm.provider "virtualbox" do |vb|
    vb.cpus = "2"
  end
  config.vm.network "private_network", ip: "192.168.33.100"

  config.vm.provision "shell", inline: <<-SHELL
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl
    chmod +x ./kubectl
    mv ./kubectl /usr/local/bin/kubectl

    curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64
    chmod +x minikube
    mv minikube /usr/local/bin/

    minikube version
    kubectl version

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
    apt-get install docker-ce docker-ce-cli containerd.io -y

    ufw disable
  SHELL
end

Boot the VM and SSH in.

vagrant up
vagrant ssh

Inside the VM, build the container image for the process we will run.

git clone https://github.com/kubernetes-up-and-running/kuard.git
cd kuard
sudo make
sed -ie 's/FROM ARG_FROM/FROM alpine/' Dockerfile.kuard
sed -ie 's/ARG_FAKEVER\/ARG_ARCH/blue\/amd64/' Dockerfile.kuard
sudo docker build -t kuard-run:1 . -f Dockerfile.kuard
sudo docker run -d --name kuard  -p 8080:8080 kuard-run:1

Open http://192.168.33.100:8080/ in a browser.

f:id:taku-woohar:20200222211155p:plain
KUAR Demo

A screen like the one above should appear.

Stop the container with:

sudo docker stop kuard

Now let's use Kubernetes.

export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none

First, check the status.

$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 127.0.0.1

$ kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
vagrant   Ready     <none>    15m       v1.8.0

$ kubectl get pods
No resources found.

Create a Pod.

$ kubectl run kuard --image=kuard-run:1 
deployment "kuard" created

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
kuard-86d4765c69-4crbk   1/1       Running   0          7m

Delete it for now.

kubectl delete deployments/kuard

Prepare kuard.yml.

apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  containers:
  - name: kuard
    image: kuard-run:1
    ports:
      - containerPort: 8080
        name: http
        protocol: TCP
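
Apply the manifest to create the Pod:

kubectl apply -f kuard.yml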

Confirm the Pod has been created.

$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
kuard     1/1       Running   0          1m

$ kubectl describe  pods kuard
Name:         kuard
Namespace:    default
Node:         vagrant/10.0.2.15
Start Time:   Sat, 22 Feb 2020 13:19:44 +0000
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"kuard","namespace":"default"},"spec":{"containers":[{"image":"kuard-run:1","name":...
Status:       Running
IP:           172.17.0.4
Containers:
  kuard:
    Container ID:   docker://7b17419be3b56fb76e3539a0702782b9a7749252b214d859143064a7eb2aa1ab
    Image:          kuard-run:1
    Image ID:       docker://sha256:d4081f1dfbe6f426cd1ac49dbb16b2f073e5d18af7022e03fa2c674b77b7f7f4
    Port:           8080/TCP
    State:          Running
      Started:      Sat, 22 Feb 2020 13:19:45 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkpw5 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-hkpw5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hkpw5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              1m    default-scheduler  Successfully assigned kuard to vagrant
  Normal  SuccessfulMountVolume  1m    kubelet, vagrant   MountVolume.SetUp succeeded for volume "default-token-hkpw5"
  Normal  Pulled                 1m    kubelet, vagrant   Container image "kuard-run:1" already present on machine
  Normal  Created                1m    kubelet, vagrant   Created container
  Normal  Started                1m    kubelet, vagrant   Started container


$ kubectl logs kuard
2020/02/22 13:19:45 Starting kuard version: v0.10.0-blue
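
To reach the Pod from inside the VM, one option (not part of the original steps, shown only as a sketch) is to forward its port with kubectl and hit it with curl:

kubectl port-forward kuard 8080:8080 &
curl http://localhost:8080/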

Delete the Pod.

$ kubectl delete pods/kuard
pod "kuard" deleted

That's all.