git-send-email: SSL_verify_mode warning

On CentOS 7, sending a patch with git send-email triggers an SSL warning, which can be resolved by modifying /usr/share/perl5/Net/SMTP.pm.

$ diff SMTP.pm.orig SMTP.pm -urp
--- SMTP.pm.orig	2019-06-21 23:33:55.298091001 +0000
+++ SMTP.pm	2019-06-21 23:33:29.120829781 +0000
@@ -59,6 +59,7 @@ sub new {
       PeerPort => $arg{Port} || 'smtp(25)',
       LocalAddr => $arg{LocalAddr},
       LocalPort => $arg{LocalPort},
+      SSL_verify_mode => 0,
       Proto     => 'tcp',
       Timeout   => defined $arg{Timeout}
       ? $arg{Timeout}
$ git send-email --no-signed-off-by-cc --suppress-cc=all --to kongjianjun@gmail.com 0001-dist-suppress-the-yaml-load-warning.patch
0001-dist-suppress-the-yaml-load-warning.patch
*******************************************************************
Using the default of SSL_verify_mode of SSL_VERIFY_NONE for client
is deprecated! Please set SSL_verify_mode to SSL_VERIFY_PEER
possibly with SSL_ca_file|SSL_ca_path for verification.
If you really don't want to verify the certificate and keep the
connection open to Man-In-The-Middle attacks please set
SSL_verify_mode explicitly to SSL_VERIFY_NONE in your application.
*******************************************************************
at /usr/libexec/git-core/git-send-email line 1211.
OK. Log says:
Server: smtp.gmail.com
MAIL FROM:<>
RCPT TO:<kongjianjun@gmail.com>
From: Amos Kong <>
To: kongjianjun@gmail.com
Subject: [PATCH scylla] dist: suppress the yaml load warning
Date: Fri, 21 Jun 2019 23:23:17 +0000
X-Mailer: git-send-email 1.8.3.1

Result: 250 2.0.0 OK 1561159400 z18sm3445252pgv.8 - gsmtp

In git 1.7.0, the default has changed to --no-chain-reply-to
Set sendemail.chainreplyto configuration variable to true if
you want to keep --chain-reply-to as your default.
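
For reference, instead of patching the system-wide Perl module, git send-email can also be pointed at an authenticated, encrypted SMTP setup via git config. A minimal sketch for Gmail follows (the account name is a placeholder; whether this also silences the warning depends on the IO::Socket::SSL version the distribution ships):

$ git config --global sendemail.smtpserver smtp.gmail.com
$ git config --global sendemail.smtpserverport 587
$ git config --global sendemail.smtpencryption tls
$ git config --global sendemail.smtpuser yourname@gmail.com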

Meet Casper

Casper was my colleague at Red Hat; we hadn't seen each other for about four years. We knew the same friends in open source communities before he joined Red Hat. He now works at Aliyun on system software.

His team wants to cooperate with universities: share their technology, provide guidance, and offer positions (intern or full-time) to students.

They have investigated and applied machine learning to system software. One successful use case is key-value lookup: it is faster than a hash table, but only for read-mostly workloads, which are common in their production environment.

The juniors are looking for "old drivers" to show them the way: internships, independent projects, and guided open source participation

To graduated XiyouLinuxers:
Before the New Year I had a chat with Prof. Wang Yagang (王亚刚): what the group members most urgently need right now is not hardware resources or activity funding, but the hands-on project experience that on-campus study lacks. Concretely, this could take the form of:
  • sharing and providing internship opportunities
  • sharing cutting-edge technology that may be made public to the group's mailing list / WeChat group
  • contributing an independent new project from your company/organization and mentoring group members through completing it
  • offering your company's/organization's past projects for the group to study from
  • since confidentiality makes the two items above hard to arrange, another good approach is to share news from open source communities with the group and lead the juniors in fixing bugs and implementing features in community projects
Prof. Wang still needs to gather the juniors' own opinions; I am sending this mail in the hope that everyone will get involved.
Kong Jianjun (孔建军), class of 2005

Using a shadowsocks proxy to get over the GFW

Server:
sudo pip install shadowsocks
sudo ssserver -p 1080 -k my_password -m aes-256-cfb --user nobody -d start
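
The server can also read its settings from a JSON config file instead of taking everything on the command line; a minimal sketch (the path, port, and password are just examples):

sudo tee /etc/shadowsocks.json <<'EOF'
{
    "server": "0.0.0.0",
    "server_port": 1080,
    "password": "my_password",
    "timeout": 300,
    "method": "aes-256-cfb"
}
EOF
sudo ssserver -c /etc/shadowsocks.json -d start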

Client:
sudo dnf copr enable librehat/shadowsocks
sudo dnf install shadowsocks-qt5

shadowsocks-qt5 only opens a local SOCKS5 port and forwards traffic to the server, so you still need to configure the browser's or the system's proxy, or use some other automatic proxy tool.
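
Before touching any browser settings, the local SOCKS5 port can be checked directly; for example (1080 stands for whatever local port shadowsocks-qt5 was configured to listen on):

curl --socks5-hostname 127.0.0.1:1080 -I https://www.google.com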

I don't know if it was because the previous SSH-tunnel proxy ran on a free OpenShift AWS VM, but Google Hangouts was sometimes unstable. Now I have set up shadowsocks on a separate paid AWS VM, and it works much, much better than before.

More info:
https://github.com/shadowsocks/shadowsocks-qt5/wiki/安装指南

Rest well, Qingran; everyone, take care of your health

Although news of people dying suddenly from overwork is common, I was still shocked last year to learn that 力闻, the host of 《长安夜话》, had died of a heart attack. I listened to his show regularly in college and after graduation, and benefited a lot from it.

Today I learned that Xia Qingran (qingran.net) passed away on November 21. At first I really couldn't believe it, hoping it was a mistake or a joke, but BillXu soon confirmed it. He had just returned the previous Saturday from a week's vacation in Japan; on Monday morning he was found dead in the villa used as lodging for a closed-door development project for a company customer. The police and coroner are still investigating the exact cause (so far both a heart attack and a cerebral hemorrhage have been suggested).

Although we hadn't talked much in recent years, I followed him through his blog, Weibo, and WeChat Moments. He really did work a lot of overtime, often posting scenes of his team still working in the middle of the night. He also shared thoughts on team management and project experience; there was pressure, but also drive.

I met BillXu in 2007, taking part in the activities and mailing-list discussions of the 哲思 (ZEUUX) free software community; later I joined 哲思, worked on its projects, and organized document translations, and after moving to Beijing I also organized 哲思 salons. Throughout that time I dealt with Qingran a lot. He came across as a true technical geek, rattling away fluently on the command line, and he gave me a great deal of technical guidance.

I remember that when he set up my account on the mail server, he asked for my private key. At the time I thought that didn't seem right, but since it was the great Qingran asking, I figured it must be fine. Once I visited their base, "井冈山", while he was handling an attack on a server; his terminal was, I think, xterm, with a font so small that nobody else could read it.

Everyone who knew Qingran knew he kept himself in great shape: swimming, cycling, running, working out; he looked strong and solid. He was just as driven at work, a down-to-earth, dependable technical geek. He never went in for grand speeches or empty talk; he simply struck people as reliable, an expert to respect and admire.

Rest well, Qingran. Everyone, please take care of your health.

Other articles (please judge the truthfulness of their content yourself):
http://bbs.tianya.cn/post-funinfo-7340692-1-1.shtml


It has been more than twenty days since this happened, and it seems to have already faded from our view; truly, a person's death is like a lamp going out. There must be plenty of searches for it on Baidu, yet there is essentially no related news. Nothing turns up on Weibo either; the posts appear to have been deleted. Bringing this up again surely hurts his family the most, but letting it vanish without a trace is also hard to accept. A living life deserves an explanation.

Ten years of KVM

This article was contributed by Amit Shah

We recently celebrated 25 years of the Linux project. KVM, or Kernel-based Virtual Machine, a part of the Linux kernel, celebrated its 10th anniversary in October. KVM was first announced on 19 October 2006 by its creator, Avi Kivity, in a post to the Linux kernel mailing list.

That first version of the KVM patch set had support for the VMX instructions found in Intel CPUs that were just being introduced around the time of the announcement. Support for AMD's SVM instructions followed soon after. The KVM patch set was merged into the upstream kernel in December 2006, and was released as part of the 2.6.20 kernel in February 2007.

Background

Running multiple guest operating systems on the x86 architecture was quite difficult without the new virtualization extensions: there are instructions that can only be executed from the highest privilege level, ring 0, and such access could not be given to each operating system without it also affecting the operation of the other OSes on the system. Additionally, some instructions do not cause a trap when executed at a lower privilege level, despite requiring a higher privilege level to function correctly, so running a "hypervisor" in ring 0 while running other OSes in lower-privileged rings was not a solution either.

The VMX and SVM instructions introduced a new ring, ring -1, to the x86 architecture. This is the privilege level where the virtual machine monitor (VMM), or the hypervisor, runs. This VMM arbitrates access to the hardware for the various operating systems so that they can continue running normally in the regular x86 environment.

There are several reasons to run multiple operating systems on one hardware system: deployment and management of OSes becomes easier with tools that can provision virtual machines (VMs). It also lowers power and cooling costs by consolidating multiple OSes and their corresponding applications and services onto newer, more capable hardware. Moreover, emulating older hardware via the hypervisor makes it possible to run legacy operating systems and applications on newer hardware without any changes.

The functionality of KVM itself is divided into multiple parts: the generic host kernel KVM module, which exposes the architecture-independent functionality of KVM; the architecture-specific kernel module in the host system; the user-space part that emulates the virtual machine hardware that the guest operating system runs on; and optional guest additions that make the guest perform better on virtualized systems.
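
That kernel-side split is directly visible on a running host. On x86, for instance:

$ lsmod | grep kvm     # expect two modules:
                       #   kvm       - the generic, architecture-independent part
                       #   kvm_intel - the architecture-specific part (kvm_amd on AMD hosts)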

At the time KVM was introduced, Xen was the de facto open source hypervisor. Since Xen was introduced before the virtualization extensions were available on x86, it had to use a different design. First, it needed to run a modified guest kernel in order to boot virtual machines. Second, Xen took over the role of the host kernel, relegating Linux to only manage I/O devices as part of Xen's special “Dom0” virtual machine. This meant that the system couldn't truly be called a Linux system — even the guest operating systems were modified Linux kernels with (at the time) non-upstream code.

Kivity started KVM development while working at Israeli startup Qumranet to fix issues with the Xen-related work the company was doing. The original Qumranet product idea was to replicate machine state across two different VMs to achieve fault tolerance. It was soon apparent to the engineers at Qumranet that Xen was too limiting and a poor model for their needs. The virtualization extensions were about to be introduced in AMD and Intel CPUs, so Kivity started a side-project, KVM, that was based on the new hardware virtualization specifications and would be used as the hypervisor for the fault-tolerance solution.

Development model

Since the beginning, Kivity wrote the code with upstreaming in mind. One of the goals of the KVM model was to reuse as much existing functionality as possible: using Linux to do most of the work, with KVM just being a driver that handled the new virtualization instructions exposed by the hardware. This enabled KVM to gain any new features that Linux developers added to other parts of the system, such as improvements in the CPU scheduler, memory management, power management, and so on.

This model worked well for the rest of the Linux ecosystem as well. Features that started their life with only virtualization in mind began being useful and widely-adopted in general use cases as well, like transparent huge pages. There weren’t two separate communities for the OS and for the VMM; everyone worked as part of one project.

Also, management of the VMs would be easier as each VM could be monitored as a regular process — tools like top and ps worked out of the box. These days, perf can be used to monitor guest activity from the host and identify bottlenecks, if any. Further chipset improvements will also enable guest process perf measurement from the host.
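
For instance (the QEMU process ID below is a placeholder):

$ ps aux | grep qemu                 # each guest shows up as an ordinary host process
$ sudo perf kvm stat record -p 4242  # record exit statistics for one guest
$ sudo perf kvm stat report          # summarize exit reasons and latencies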

The other side of KVM was in user space, where the machine that is presented to the guest OS is built. kvm-userspace was a fork of the QEMU project. QEMU is a machine emulator — it can run unmodified OS images for a variety of architectures that it supports, and emulate those architectures' instructions on the host architecture it runs on. This is of course very slow, but the advantage of the QEMU project was that it had quite a few devices already emulated for the x86 architecture — such as the chipset, network cards, display adapters, and so on.

What kvm-userspace did was short-circuit the emulation code to only allow x86-on-x86 and use the KVM API for actually running the guest OS on the host CPU. When the guest OS performed a privileged operation, the CPU would exit to the VMM code and KVM would take over; if it could service the request itself, it would do so and give control back to the guest. This was a “lightweight exit”. For requests that the KVM code couldn't serve, like any device emulation, it would defer to QEMU. This implied exiting to user space from the host Linux kernel, and hence this was called a “heavyweight exit”.
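
Both kinds of exits can be observed from the host through KVM's tracepoints; a rough sketch (the QEMU PID is a placeholder):

$ sudo perf stat -e kvm:kvm_exit -e kvm:kvm_userspace_exit -p 4242 sleep 10
# kvm:kvm_exit fires on every exit into the KVM module (the lightweight path);
# kvm:kvm_userspace_exit fires only when KVM defers to QEMU (the heavyweight path).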

One of the drawbacks in this model was the maintenance of the fork of QEMU. The early focus of the developers was on stabilizing the kernel module, and getting more and more guests to work without a hitch. That meant much less developer time was spent on the device emulation code, and hence the work to redo the hacks to make them suitable for upstream remained at a lower priority.

Xen too used a fork of QEMU for its device emulation in its HVM mode (the mode where Xen used the new hardware virtualization instructions). In addition, QEMU had its own non-upstream Linux kernel accelerator module (KQEMU) for x86-on-x86 that eliminated the emulation layer, making x86 guests run faster on x86 hardware. Integrating all of this required a maintainer who would understand the various needs from all the projects. Anthony Liguori stepped up as a maintainer of the QEMU project, and he had the trust of the Xen and KVM communities. Over time, in small bits, the forks were eliminated, and now KVM as well as Xen use upstream QEMU for their device model emulation.

The “do one thing, do it right” mantra, along with “everything is a file”, was exploited to the fullest. The KVM API allows one to create VMs — or, alternatively, sandboxes — on a Linux system. These can then run operating systems inside them, or just about any code that will not interfere with the running system. This also means that there are other user-space implementations that are not as heavyweight or as featureful as QEMU. Tools that can quickly boot into small applications or specialized OSes with a KVM VM started showing up — with kvmtool being the most popular one.
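
kvmtool's front-end binary is called lkvm, and booting a kernel into a throwaway VM is close to a one-liner; a sketch (the kernel image path and sizes are examples):

$ lkvm run -k ./bzImage -m 512 -c 2 --name demo-vm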

Developer interest

Since the original announcement of the KVM project, many hackers became interested in exploring KVM. It helped that hacking on KVM was very convenient: a system reboot wasn't required to install a new VMM. It was as simple as re-compiling the KVM modules, removing the older modules, and loading the newly-compiled ones. This helped immensely during the early stabilization and improvement phases. Debugging was a much faster process, and developers much preferred this way of working, compared with compiling a new VMM, installing it, updating the boot loader, and rebooting the system. Another advantage, perhaps of lower importance on development systems but nonetheless essential for my work-and-development laptop, was that root permissions were not required to run a virtual machine.
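
In today's terms, that edit-compile-reload cycle looks roughly like this (module file names are for an Intel host; kvm-amd.ko applies on AMD):

$ make                               # rebuild the KVM modules
$ sudo modprobe -r kvm_intel kvm     # unload the old modules, no reboot needed
$ sudo insmod ./kvm.ko
$ sudo insmod ./kvm-intel.ko         # the new VMM is live again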

Another handy debugging trick that was made possible by the separation of the KVM module and QEMU was that if something didn’t work in KVM mode, but worked in emulated mode, the fault was very likely in the KVM module. If some guest didn’t work in either of the modes, the fault was in the device model or QEMU.
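
With a modern QEMU the two modes are one flag apart, which makes this kind of bisection trivial (the disk image name is a placeholder):

$ qemu-system-x86_64 -enable-kvm -m 1024 disk.img   # hardware virtualization via KVM
$ qemu-system-x86_64 -m 1024 disk.img               # pure emulation (TCG), slow but KVM-free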

The early KVM release model helped with a painless development experience as well: even though the KVM project was part of the upstream Linux kernel, Kivity maintained the KVM code on a separate release train. A new KVM release was made regularly that included the source of the KVM modules, a small compatibility layer to compile the KVM modules on any of the supported Linux kernels, and the kvm-userspace piece. This ensured that a distribution kernel, which had an older version of the KVM modules, could be used unchanged by compiling the modules from the newest KVM release for that kernel.

The compatibility layer required some effort to maintain. It needed to ensure that the new KVM code that used newer kernel APIs that were not present on older kernels continued to work, by emulating the new API. This was a one-time cost to add such API compatibility functions, but the barrier to entry for new contributors was significantly reduced. Hackers could download the latest KVM release, compile the modules against whichever kernel they were running, and see virtual machines boot. If that did not work, developers could post bug-fix patches.

Widespread adoption

Chip vendors started taking interest and porting KVM to their architectures: Intel added support for IA64 along with features and stability fixes to x86; IBM added support for s390 and POWER architectures; ARM and Linaro contributed to the ARM port; and Imagination Technologies added MIPS support. These didn’t happen all at once, though. ARM support, for example, came rather late (“it’s the reality that’s not timely, not the prediction”, quipped Kivity during a KVM Forum keynote when he had predicted the previous year that an ARM port would materialize).

Developer interest could also be seen at the KVM Forums, which is an annual gathering of people interested in KVM virtualization. The first KVM Forum in 2007 had a handful of developers in a room where many discussions about the current state of affairs, and where to go in the future, took place. One small group, headed by Rusty Russell, took over the whiteboard and started discussions on what a paravirtualized interface for KVM would look like. This is where VIRTIO started to take shape. These days, the KVM Forum is a whole conference with parallel tracks, tens of speakers, and hundreds of attendees.

As time passed, it was evident the KVM kernel modules were not where most of the action was — the instruction emulation, when required, was more or less complete, and most distributions were shipping recent Linux kernels. The focus had then switched to the user space: adding more device emulation, making existing devices perform better, and so on. The KVM releases then focused more on the user-space part, and the maintenance of the compatibility layer was eased. At this time, even though the kvm-userspace fork existed, effort was made to ensure new features went into the QEMU project rather than the kvm-userspace project. Kivity too started feeding in small changes from the kvm-userspace repository to the QEMU project.

While all this was happening, Qumranet had changed direction, and was now pursuing desktop virtualization with KVM as the hypervisor. In September 2008, Red Hat announced it would acquire Qumranet. Red Hat had supported the Xen hypervisor as its official VMM since the Red Hat Enterprise Linux 5.0 release. With the RHEL 5.4 release, Red Hat started supporting both Xen and KVM as hypervisors. With the release of RHEL 6.0, Red Hat switched to only supporting KVM. KVM continued enjoying out-of-the-box support in other distributions as well.

Present and future

Today, there are several projects that use KVM as the default hypervisor: OpenStack and oVirt are the more popular ones. These projects concern themselves with large-scale deployments of KVM hosts and several VMs in one deployment. These come with various use cases, and hence ask different things of KVM. As guest OSes grow larger (more RAM and virtual CPUs), they become more difficult to live-migrate without incurring too much downtime; Telco deployments need low-latency network packet processing, so realtime KVM is an area of interest; and faster disk and network I/O is always an area of research. Keeping everything secure and reducing the hypervisor footprint are also being worked on. The ways in which a malicious guest can break out of its VM sandbox, and how to mitigate such attacks, is also a prime area of focus.

A lot of advancement happens with new hardware updates and devices. However, a lot of effort is also spent in optimizing the current code base, writing new algorithms, and coming up with new ways to improve performance and scalability with the existing infrastructure.

For the next ten years, the main topics of discussion may well not be about the development of the hypervisor. More interesting will be to see how Linux gets used as a hypervisor, bringing better sandboxing for running untrusted code, especially on mobile phones, and running the cloud infrastructure, by being pervasive as well as invisible at the same time.

From: http://lwn.net/Articles/705160/

Fedora 24 on Dell XPS-13 hangs in shutdown

Fedora 24 installed on my Dell XPS-13 laptop hangs at every shutdown: sometimes it powers off by itself after a long wait, and sometimes, out of impatience, I just force it off. I had seen a workaround of turning off WiFi, since the error below appears during shutdown, but it still hung after I turned WiFi off.

brcmf_cfg80211_reg_notifier: not a ISO3166 code

I never found a working workaround, and the related bug [1] has not been fixed either. Today I upgraded the system, with the kernel going to 4.7.3-200, and the problem still exists. Searching again, I found a workaround [2]: disable the firewall. I have verified that it works; the error above still shows up, but the machine now shuts down normally.
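
Fedora's firewall service is firewalld, so the workaround from [2] amounts to something like the following (stop it once to test, or disable it permanently at the cost of running without a firewall):

sudo systemctl stop firewalld
sudo systemctl disable firewalld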

[1] https://bugzilla.kernel.org/show_bug.cgi?id=103201
[2] http://forums.fedoraforum.org/showthread.php?t=308235 (post #14)

Expanding the storage space of an EBS volume on Amazon EC2

Expanding the Storage Space of an EBS Volume on Linux

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html

First create a snapshot of the volume, then restore the snapshot to a new volume; the new (larger) size can be specified at that point. Detach the original disk and attach the new volume. Here the device name is set to /dev/sda1, so it becomes the bootable root volume by default.
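
With the AWS CLI, the procedure sketched above looks roughly like this (all IDs, the size, and the availability zone are placeholders):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --size 100 --availability-zone us-east-1a
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sda1
# inside the instance, grow the filesystem afterwards if it is not resized automatically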