
xxx Is Damaged and Can’t Be Opened. You Should Move It To The Trash

I recently got a company Mac Pro with an ARM chip. After setting it up, newly downloaded software, such as the JDK, kept failing to run with the error below:
"xxx Is Damaged and Can't Be Opened. You Should Move It To The Trash"
damage.png

The fix

Googling turned up this fix: https://discussions.apple.com/thread/253714860

$ xattr -c <path/to/application.app>

Running the command above on java still produced the same error, and for a while I doubted the method. But querying its attributes with xattr showed the relevant attributes were still present. The eventual fix: run the command on the files at every level of the directory:

eric@Q67J490MY0 bin % pwd
/Users/eric/work/tools/jdks/jdk17.0.3.1/bin
eric@Q67J490MY0 bin % xattr -c *
eric@Q67J490MY0 bin % cd ..
eric@Q67J490MY0 jdk17.0.3.1 % xattr -c *
eric@Q67J490MY0 jdk17.0.3.1 % ./bin/java

The commands above strip those extended attributes from every file.
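Instead of running `xattr -c *` in each subdirectory by hand, the whole tree can be cleared in one pass. A minimal sketch, assuming macOS's xattr (which supports -r for recursion); the JDK path is the one from the transcript above:

```shell
JDK_HOME="/Users/eric/work/tools/jdks/jdk17.0.3.1"

# Recursively clear all extended attributes under the JDK directory.
if [ -d "$JDK_HOME" ] && command -v xattr >/dev/null 2>&1; then
  xattr -rc "$JDK_HOME"
  # equivalent without -r:
  # find "$JDK_HOME" -exec xattr -c {} \;
fi
```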

More

xattr -h  # show help

Profiling Python with cProfile

I recently started looking at machine-learning projects, and therefore at Python code. After deploying an ML model to prod for predictions, I found its performance there was very poor: an API that finishes in about 1 s locally took 30-plus seconds in production. A first look showed that the prediction code in production spawned 50-plus Python threads. My suspicion: production runs in a container but sees the host machine's CPU count, so it starts many threads, while the container caps actual CPU usage; the resulting thread contention degrades performance.
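The suspected root cause can be sketched as follows (this is illustrative, not the actual service code; the value 4 is an example, not a recommendation):

```python
import os

# Inside a container, Python still reports the host's CPU count, not the
# container's CPU quota, so libraries size their thread pools too large.
host_cpus = os.cpu_count()
print("cpu_count:", host_cpus)

# Common mitigation: cap the numeric-library thread pools explicitly,
# before the libraries are imported.
os.environ["OMP_NUM_THREADS"] = "4"   # OpenMP-backed code paths
os.environ["MKL_NUM_THREADS"] = "4"   # MKL-backed code paths
# For PyTorch-backed models: torch.set_num_threads(4)
```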

So I tried profiling; cProfile ships with Python.

The code to profile:

import os
import time
import cProfile
from transformers import BertTokenizer, BertModel

pretrained_model_path = os.path.abspath(os.path.dirname(__file__)) + '/bert-base-uncased'
bert_tokenizer = BertTokenizer.from_pretrained(pretrained_model_path, cache_dir='/tmp')
bert_model = BertModel.from_pretrained(pretrained_model_path)

s = "This brings us to the downsides"

def bert_function():
    t0 = time.time()
    for i in range(0, 10):
        inputs = bert_tokenizer(s, return_tensors="pt")
        outputs = bert_model(**inputs)

    print("used: {}".format((time.time() - t0)))

cProfile.run('bert_function()', 'my.prof')

Run it:

python test.py

Convert to a flame graph with flameprof:

python -m flameprof my.prof > my.svg

Result:
out_svg.png

Reference:
https://docs.python.org/3/library/profile.html

Prometheus Study Notes

Introduction

Overview

  1. Prometheus is an open-source systems monitoring and alerting toolkit.
  2. Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
  3. Features

    1. a multi-dimensional data model with time series data identified by metric name and key/value pairs
    2. PromQL, a flexible query language to leverage this dimensionality
    3. no reliance on distributed storage; single server nodes are autonomous
    4. time series collection happens via a pull model over HTTP
    5. pushing time series is supported via an intermediary gateway
    6. targets are discovered via service discovery or static configuration
    7. multiple modes of graphing and dashboarding support
  4. Components

    1. the main Prometheus server which scrapes and stores time series data
    2. client libraries for instrumenting application code
    3. a push gateway for supporting short-lived jobs
    4. special-purpose exporters for services like HAProxy, StatsD, Graphite, etc.
    5. an alertmanager to handle alerts
    6. various support tools
  5. Prometheus configuration file: prometheus.yml

    1. global.scrape_interval
    2. global.evaluation_interval
    3. rule_files: []
    4. scrape_configs: {job_name:"", static_configs:""}
  6. Prometheus server UI

    1. status page: http://<host>:9090/
    2. self metrics page: http://<host>:9090/metrics
    3. expression browser: http://<;host>:9090/graph
  7. glossary

    1. The Alertmanager takes in alerts, aggregates them into groups, de-duplicates, applies silences, throttles, and then sends out notifications to email, Pagerduty, Slack etc.
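The prometheus.yml keys listed above map onto a minimal configuration like this (targets, file names, and intervals are illustrative):

```yaml
global:
  scrape_interval: 15s       # how often targets are scraped
  evaluation_interval: 15s   # how often rules are evaluated
rule_files:
  - "example.rules.yml"
scrape_configs:
  - job_name: "prometheus"   # Prometheus scraping its own /metrics
    static_configs:
      - targets: ["localhost:9090"]
```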

Concepts

  1. Data models

    1. <metric_name>{<label_name>=<label_value>, ...}
    2. metric names match [a-zA-Z_:][a-zA-Z0-9_:]* — letters, digits, underscores, and colons (colons are reserved for user-defined recording rules)
    3. dimensions are defined via labels
    4. time series: streams of timestamped values belonging to the same metric and the same set of labeled dimensions.
    5. adding or removing a label, or changing a label value, creates a new time series
    6. label names match [a-zA-Z_][a-zA-Z0-9_]* — letters, digits, underscores (names beginning with two underscores (__) are reserved for internal use)
    7. label values may contain any Unicode characters
    8. A label with an empty label value is considered equivalent to a label that does not exist
  2. Metrics Types

    1. Counter: a monotonically increasing counter; it resets to 0 on restart and counts up again
    2. Gauge: a numeric value that can go up or down
    3. Histogram: exposes several series under a base name:

      1. <basename>_bucket{le="<upper inclusive bound>"}
      2. <basename>_sum: total sum
      3. <basename>_count: =<basename>_bucket{le="+Inf"}
    4. Summary
  3. Jobs & Instances

    1. When Prometheus scrapes a target, it attaches some labels automatically to the scraped time series which serve to identify the scraped target: job: <job_name> & instance: <host>:<port>
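Putting the data model and metric types together, a scrape target's /metrics page exposes text like this (names and values are made up):

```
# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027

# TYPE request_duration_seconds histogram
request_duration_seconds_bucket{le="0.5"} 129389
request_duration_seconds_bucket{le="+Inf"} 144320
request_duration_seconds_sum 53423.2
request_duration_seconds_count 144320
```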

Prometheus

  1. Configuration

    1. command-line flags configure immutable system parameters (such as storage locations, amount of data to keep on disk and in memory, etc.)
    2. configuration file defines everything related to scraping jobs and their instances, as well as which rule files to load.
    3. Prometheus can reload its configuration at runtime.

      1. send SIGHUP;
      2. HTTP POST request to the /-/reload endpoint
    4. scrape_config

      1. Targets with static_configs or dynamic service-discovery;
    5. rule check -> promtool check rules /path/to/example.rules.yml
  2. PromQL

    1. can evaluate: instant vector, range vector, scalar, string
    2. metrics_name{} can be written as {__name__="metrics_name"}; e.g. query multiple metrics with {__name__=~"job:.*"}
    3. subQuery: <instant_query> '[' <range> ':' [<resolution>] ']' [ @ <float_literal> ] [ offset <duration> ] (<resolution> is optional. Default is the global evaluation interval.)
    4. Vector matching

      1. <vector expr> <bin-op> ignoring(<label list>) <vector expr>
      2. <vector expr> <bin-op> on(<label list>) <vector expr>
  3. Storage

    1. format: https://github.com/prometheus/prometheus/blob/release-2.36/tsdb/docs/format/README.md
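The two vector-matching forms noted above, written out (the metric names are the standard examples from the Prometheus documentation, not metrics from this setup):

```
# on(): match left- and right-hand series by the listed labels only
method_code:http_errors:rate5m{code="500"}
  / on(method)
method:http_requests:rate5m

# ignoring(): match on all labels except the listed ones
method_code:http_errors:rate5m{code="500"}
  / ignoring(code)
method:http_requests:rate5m
```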

Installing Common Tools with Docker

I have an idle server, so I installed the commonly used tools on it to make day-to-day experimentation easy:

Nginx

sudo docker run --restart always --name nginx -v /home/supra/work/data/nginx_html:/usr/share/nginx/html:ro -v /home/supra/work/data/nginx_config/mime.types:/etc/nginx/mime.types:ro  -p 80:80 -d nginx

mongoDB

sudo docker network create mongo-network
sudo docker run --network mongo-network --restart always -p 27017:27017 --volume /home/supra/work/data/mongo/grafana:/data/db --name mongodb -d mongo
sudo docker run --network mongo-network --restart always -e ME_CONFIG_MONGODB_SERVER=mongodb -p 8081:8081 --name mongoui mongo-express

elasticSearch & kibana

Reference: https://www.elastic.co/guide/en/kibana/current/docker.html

sudo docker network create elastic
sudo docker run --restart always --name es01 --network elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -d docker.elastic.co/elasticsearch/elasticsearch:7.16.1
sudo docker run --restart always --name kib01 --network elastic -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://es01:9200" -d docker.elastic.co/kibana/kibana:7.16.1

Splunk

sudo docker run --restart always -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license"  -e "SPLUNK_PASSWORD=Sre@2021" --name splunk -d splunk/splunk

Clickhouse

docker run -d --name clickhouse-server --ulimit nofile=5120:5120 --volume=/home/supra/work/data/clickhouse:/var/lib/clickhouse -p 8123:8123 -p 9000:9000 yandex/clickhouse-server

Redis

Reference: https://hub.docker.com/_/redis

$ docker network create redis-network
$ sudo docker run --network redis-network --restart always --volume /home/supra/work/data/redis/data:/data --name redis -p 6379:6379 -d redis redis-server --save 60 1 --loglevel warning
$ docker run -it --network redis-network --rm redis redis-cli -h redis

Prometheus

Reference: https://prometheus.io/docs/prometheus/latest/installation/

sudo docker run -d --restart always --name prometheus -p 9090:9090 -v /home/supra/work/data/prometheus:/etc/prometheus prom/prometheus

MySQL & phpmyadmin

Reference: https://hub.docker.com/_/mysql

mkdir /home/supra/work/data/mysql/data
docker run --restart always --name mysqld -v /home/supra/work/data/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Sre@2022 -d -p 3306:3306 -e "MYSQL_USER=sre" -e "MYSQL_PASSWORD=Sre@2022" mysql
# create phpmyadmin UI
docker run --restart always --name phpmyadmin -d --link mysqld:db -p 8082:80 phpmyadmin

continuumio/anaconda3

docker run --restart always -d --name anaconda3  -p 8888:8888 continuumio/anaconda3 /bin/bash -c "\
    conda install jupyter -y --quiet && \
    mkdir -p /opt/notebooks && \
    jupyter notebook --NotebookApp.token='' --NotebookApp.password='' \
    --notebook-dir=/opt/notebooks --ip='*' --port=8888 \
    --no-browser --allow-root"

Additionally, put an index.html in nginx's html directory:

<li><a target="_blank" href=":5601/">elastic</a></li>
<li><a target="_blank" href=":8000/">Splunk(admin/Sre@2021)if expired, reinstall</a></li>
<li><a target="_blank" href=":8081/">MongoUI</a></li>
<li><a target="_blank" href=":8123/">ClickHouseUI</a></li>
<li><a target="_blank" href="/">RedisUI</a></li>
<li><a target="_blank" href=":9090/">Prometheus</a></li>

<li><a target="_blank" href="https://www.tianxiaohui.com/display/~xiatian/supra">wiki</a></li>

<script>
    (function() {
        const links = document.querySelectorAll("a");
        links.forEach(function(ele){
                ele.href = ele.href.replace("/:", ":");
        });
    })();
</script>

iTerm Customization Notes

Change the foreground and background colors, as shown:

colors.png
Result:
result.png

Change Key Mappings

Profiles -> Keys -> Key Mappings -> Natural Text Editing. The benefit: Fn + left/right arrows and Option + arrows then behave as in a normal text editor.
key.png

Cursor blinking

blind.png