[Forwarded] Hiring | 40K–60K for Senior Architects and Senior Software Engineers

2017-05-11 CSDN Enterprise Recruitment

Beijing TAOS Data Technology Co., Ltd.

TAOS Data (涛思数据) focuses on the efficient collection, storage, querying, and analysis of real-time big data. Its independently developed core engine writes data more than five times faster than ordinary databases and runs queries more than ten times faster. The technology applies broadly in the IoT and financial industries, where it can substantially raise the performance and capacity of big-data systems while cutting operating costs.

In May 2017, TAOS Data received investment from the well-known angel investor Xue Manzi and from Future Capital (明势资本). The core team members have years of R&D experience in telecom equipment, databases, storage, and big data; founder Tao Jianhui previously founded two successful high-tech companies, Hesine (和信) and Happy Mommy (快乐妈咪).

Continue reading “[Forwarded] Hiring | 40K–60K for Senior Architects and Senior Software Engineers”

“Old hands,” your juniors are asking for guidance: internship opportunities, independent projects, leading participation in open source communities

To graduated XiyouLinuxers:
Before the New Year I had a chat with Mr. Wang Yagang. What group members most urgently need right now is not hardware resources or activity funding, but the hands-on project experience that on-campus study lacks. Concretely, you could help by:
  • Sharing or providing internship opportunities
  • Sharing publicly shareable cutting-edge technology with the group’s mailing list / WeChat group
  • Bringing an independent new project from your company or organization and mentoring group members to complete it
  • Bringing out past projects from your company or organization for the group to study
  • Since confidentiality restrictions may make the two items above impractical, another good approach is to share information from open source communities with the group and lead the juniors in fixing bugs and developing features for community projects
The specifics still need Mr. Wang to gather the juniors’ own opinions; I am sending this email in the hope that everyone will get involved.
Kong Jianjun, Class of 2005

Blogging from a Mobile Browser

I recently deleted some rarely used phone apps and switched to sites’ mobile versions in the phone’s browser. The strongest impression: simple, practical, efficient. Desktop web versions have the most complex designs, followed by mobile apps. The richer the resources, the more designers abuse them; where resources are limited, only the most essential, useful things get built.

I had used the wordpress client before, and it was already quite convenient: faster than visiting the website, with some content synced lazily. The H5 version in the mobile browser is simple enough as well.

Crossing the Great Firewall with a shadowsocks Proxy

Server:
sudo pip install shadowsocks
sudo ssserver -p 1080 -k my_password -m aes-256-cfb --user nobody -d start
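
The same server settings can also live in a JSON config file instead of flags (a minimal sketch; the file path and password here are placeholders, and -c is the Python ssserver's config-file option):

sudo tee /etc/shadowsocks.json <<'EOF'
{
    "server": "0.0.0.0",
    "server_port": 1080,
    "password": "my_password",
    "method": "aes-256-cfb",
    "timeout": 300
}
EOF
sudo ssserver -c /etc/shadowsocks.json --user nobody -d start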

Client:
sudo dnf copr enable librehat/shadowsocks
sudo dnf install shadowsocks-qt5

shadowsocks-qt5 only opens a local SOCKS5 port and relays it to the server, so you still need to point the browser’s or the system’s proxy at that port, or use some other automatic proxy tool.
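
For example, on a GNOME desktop the system-wide proxy can be pointed at that local port like this (a sketch assuming shadowsocks-qt5 is listening on 127.0.0.1:1080; adjust the port to your profile):

# Route system traffic through the local SOCKS5 port opened by shadowsocks-qt5:
gsettings set org.gnome.system.proxy mode 'manual'
gsettings set org.gnome.system.proxy.socks host '127.0.0.1'
gsettings set org.gnome.system.proxy.socks port 1080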

I don’t know whether it was because the earlier SSH-tunnel proxy ran on a free OpenShift AWS virtual machine, but Google Hangouts was sometimes unstable. Now I have set up shadowsocks on a separate paid AWS VM, and it is much, much better than before.

More info:
https://github.com/shadowsocks/shadowsocks-qt5/wiki/安装指南

Pansystems, Freedom, and the “One, Hundred, Ten Thousand” Project

Note from Xu Yongjiu: Mr. Hong Feng is a pioneering figure in China’s free software community; this year he began independently publishing Free Software magazine. Written as a memoir, this piece recounts the ups and downs he has experienced on the road to professional success since graduating from university. It is published on this site with his permission.

(1) First Encounter with Professor Wu Xuemou
(2) A Guide in the Fog
(3) Wasted Years
(4) Out of the Chaos
(5) The Long Road of Seeking
(6) First Taste of the Network’s Power
(7) Java in a Nutshell: Hard to Break into Publishing
(8) Meeting Dr. Stallman
(9) The O’Reilly Variations
(10) A Turn of Fortune at Jiuzhaigou
(11) Free Software from a Pansystems Perspective
(12) A Comeback: The “One, Hundred, Ten Thousand” Project
Epilogue

(1) First Encounter with Professor Wu Xuemou

I first heard of pansystems and Professor Wu Xuemou’s name around the beginning of 1990. I was twenty-two then, hot-blooded, working at a state-owned foreign trade company. Because my English was better than anyone else’s there, I was regularly sent to foreign business negotiations, while people with decades of seniority but weaker language skills could only serve as my assistants, which made them intensely jealous. Being no good at office politics, I was eventually undermined by colleagues and “demoted” from a supervisory post to a petty clerk in the storage-and-transport department, with a garrulous old woman assigned to watch over me all day. Life turned tedious; I watched the days slip by one after another, feeling unbearably despondent.

Read the full text: http://devrel.qiniucdn.com/data/20081219132542/index.html

Rest in Peace, Qingran; Everyone, Take Care of Your Health

Although sudden deaths from overwork are often in the news, I was still deeply shocked last year to learn that Li Wen, host of Chang’an Night Talk (《长安夜话》), had died of a heart attack. I listened to his program regularly during college and after graduation, and benefited from it enormously.

Today I learned that Xia Qingran (qingran.net) passed away on November 21. At first I truly could not believe it, hoping it was a mistake or a joke, but BillXu soon confirmed it. He had just returned last Saturday from a week’s vacation in Japan; on Monday morning he was found dead in the villa where the company was lodging for closed-door development with a customer. The exact cause is still being investigated by police and forensics (currently there are two theories: heart attack or cerebral hemorrhage).

Although we had not talked much in recent years, I kept up with him through his blog, Weibo, and WeChat Moments. He really did work a great deal of overtime, often posting scenes of his team still at work in the middle of the night. He also shared thoughts on team management and projects; there was pressure, but also drive.

I met BillXu in 2007 through events and mailing-list discussions of the Zeuux (哲思) free software community. Later I joined Zeuux, took part in its projects, organized documentation translations, and after moving to Beijing organized Zeuux salons as well. Through all of that I dealt with Qingran a great deal. He came across as a true technology geek, fingers flying fluently at the command line, and he gave me a lot of technical guidance.

I remember when he set up my account on the mail server, he asked for my private key. At the time I thought that couldn’t be right, but since it was the great Qingran asking, I figured it must be fine. Once, visiting their home base “Jinggangshan” (井冈山), I watched him dealing with an attack on a server; his terminal, xterm I think, used a font so tiny that nobody else could make it out.

Everyone who knew Qingran knew how fit he was: swimming, cycling, running, working out; he looked strong and solid. He was just as hard-driving at work, an honest, down-to-earth technical geek. He didn’t hold forth with grand principles or empty talk, yet he always came across as someone you could rely on, an expert who earned people’s respect and admiration.

Rest in peace, Qingran; everyone, please take care of your health.

Other articles (judge the accuracy of their content for yourself):
http://bbs.tianya.cn/post-funinfo-7340692-1-1.shtml


More than twenty days have passed, and the whole thing already seems to have faded from our view; truly, a person’s death is like a lamp going out. Baidu must be seeing plenty of searches, yet there is almost no related news, and nothing turns up on Weibo either, as if the posts were deleted. Bringing it up again surely hurts the family most of all, but letting it vanish without a trace is also hard to accept; a living life deserves an accounting.

Ten years of KVM

This article was contributed by Amit Shah

We recently celebrated 25 years of the Linux project. KVM, or Kernel-based Virtual Machine, a part of the Linux kernel, celebrated its 10th anniversary in October. KVM was first announced on 19 October 2006 by its creator, Avi Kivity, in a post to the Linux kernel mailing list.

That first version of the KVM patch set had support for the VMX instructions found in Intel CPUs that were just being introduced around the time of the announcement. Support for AMD’s SVM instructions followed soon after. The KVM patch set was merged into the upstream kernel in December 2006, and was released as part of the 2.6.20 kernel in February 2007.

Background

Running multiple guest operating systems on the x86 architecture was quite difficult without the new virtualization extensions: there are instructions that can only be executed from the highest privilege level, ring 0, and such access could not be given to each operating system without also affecting the operation of the other OSes on the system. Additionally, some instructions do not cause a trap when executed at a lower privilege level, despite requiring a higher privilege level to function correctly, so running a “hypervisor” in ring 0 while running the other OSes in lower-privileged rings was not a solution either.

The VMX and SVM instructions introduced a new ring, ring -1, to the x86 architecture. This is the privilege level where the virtual machine monitor (VMM), or the hypervisor, runs. This VMM arbitrates access to the hardware for the various operating systems so that they can continue running normally in the regular x86 environment.

There are several reasons to run multiple operating systems on one hardware system: deployment and management of OSes becomes easier with tools that can provision virtual machines (VMs); consolidating multiple OSes and their corresponding applications and services onto newer, more capable hardware lowers power and cooling costs; and legacy operating systems and applications can run unchanged on newer hardware, with the hypervisor emulating the older hardware they expect.

The functionality of KVM itself is divided into multiple parts: the generic host kernel KVM module, which exposes the architecture-independent functionality of KVM; the architecture-specific kernel module in the host system; the user-space part that emulates the virtual machine hardware the guest operating system runs on; and optional guest additions that make the guest perform better on virtualized systems.
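
On a running host, the generic/architecture-specific split is visible in the loaded modules (a sketch from an Intel x86 machine; AMD hosts load kvm_amd instead):

lsmod | grep kvm
# kvm_intel  ...  architecture-specific module (VMX support)
# kvm        ...  generic, architecture-independent module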

At the time KVM was introduced, Xen was the de facto open source hypervisor. Since Xen predated the virtualization extensions on x86, it had to use a different design. First, it needed to run a modified guest kernel in order to boot virtual machines. Second, Xen took over the role of the host kernel, relegating Linux to only managing I/O devices as part of Xen’s special “Dom0” virtual machine. This meant that the system couldn’t truly be called a Linux system; even the guest operating systems were modified Linux kernels with (at the time) non-upstream code.

Kivity started KVM development while working at Israeli startup Qumranet to fix issues with the Xen-related work the company was doing. The original Qumranet product idea was to replicate machine state across two different VMs to achieve fault tolerance. It was soon apparent to the engineers at Qumranet that Xen was too limiting and a poor model for their needs. The virtualization extensions were about to be introduced in AMD and Intel CPUs, so Kivity started a side-project, KVM, that was based on the new hardware virtualization specifications and would be used as the hypervisor for the fault-tolerance solution.

Development model

Since the beginning, Kivity wrote the code with upstreaming it in mind. One of the goals of the KVM model was as much reuse of existing functionality as possible: using Linux to do most of the work, with KVM just being a driver that handled the new virtualization instructions exposed by hardware. This enabled KVM to gain any new features that Linux developers added to the other parts of the system, such as improvements in the CPU scheduler, memory management, power management, and so on.

This model worked well for the rest of the Linux ecosystem too. Features that started life with only virtualization in mind, such as transparent huge pages, became useful and widely adopted in general use cases as well. There weren’t two separate communities for the OS and for the VMM; everyone worked as part of one project.

Also, management of the VMs would be easier as each VM could be monitored as a regular process — tools like top and ps worked out of the box. These days, perf can be used to monitor guest activity from the host and identify bottlenecks, if any. Further chipset improvements will also enable guest process perf measurement from the host.
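
Because each guest is an ordinary host process, the usual tools apply directly (a sketch; the qemu process name and the perf kvm invocation assume a QEMU/KVM guest and a perf build with KVM support):

# Guests show up as regular processes:
ps aux | grep qemu

# Profile a guest from the host and summarize its VM exits:
sudo perf kvm stat record -p "$(pgrep -f qemu | head -n1)" sleep 10
sudo perf kvm stat report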

The other side of KVM was in user space, where the machine presented to the guest OS is built. kvm-userspace was a fork of the QEMU project. QEMU is a machine emulator: it can run unmodified OS images for a variety of architectures, emulating those architectures’ instructions on the host architecture it runs on. This is of course very slow, but the advantage of the QEMU project was that it already emulated quite a few devices for the x86 architecture, such as the chipset, network cards, display adapters, and so on.

What kvm-userspace did was short-circuit the emulation code to allow only x86-on-x86 and use the KVM API to actually run the guest OS on the host CPU. When the guest OS performed a privileged operation, the CPU would exit to the VMM code and KVM would take over; if KVM could service the request itself, it did so and gave control back to the guest. This was a “lightweight exit”. Requests that the KVM code could not serve, such as any device emulation, were deferred to QEMU; since that meant exiting to user space from the host Linux kernel, it was called a “heavyweight exit”.

One of the drawbacks in this model was the maintenance of the fork of QEMU. The early focus of the developers was on stabilizing the kernel module, and getting more and more guests to work without a hitch. That meant much less developer time was spent on the device emulation code, and hence the work to redo the hacks to make them suitable for upstream remained at a lower priority.

Xen too used a fork of QEMU for its device emulation in its HVM mode (the mode where Xen used the new hardware virtualization instructions). In addition, QEMU had its own non-upstream Linux kernel accelerator module (KQEMU) for x86-on-x86 that eliminated the emulation layer, making x86 guests run faster on x86 hardware. Integrating all of this required a maintainer who understood the needs of all the projects. Anthony Liguori stepped up as maintainer of the QEMU project, and he had the trust of the Xen and KVM communities. Over time, in small bits, the forks were eliminated, and now both KVM and Xen use upstream QEMU for their device-model emulation.

The “do one thing, do it right” mantra, along with “everything is a file”, was exploited to the fullest. The KVM API allows one to create VMs — or, alternatively, sandboxes — on a Linux system. These can then run operating systems inside them, or just about any code that will not interfere with the running system. This also means that there are other user-space implementations that are not as heavyweight or as featureful as QEMU. Tools that can quickly boot into small applications or specialized OSes with a KVM VM started showing up — with kvmtool being the most popular one.
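
kvmtool, for example, can boot a kernel straight into a KVM VM without a full machine emulator (a sketch; lkvm is kvmtool's binary, and the bzImage path is a placeholder):

# Boot a kernel in a lightweight VM with 256MB of RAM and 2 VCPUs:
./lkvm run -k ./bzImage -m 256 -c 2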

Developer interest

Since the original announcement of the KVM project, many hackers were interested in exploring KVM. It helped that hacking on KVM was very convenient: a system reboot wasn’t required to install a new VMM. It was as simple as re-compiling the KVM modules, removing the older modules, and loading the newly-compiled ones. This helped immensely during the early stabilization and improvement phases. Debugging was a much faster process, and developers much preferred this way of working, as contrasted with compiling a new VMM, installing it, updating the boot loader, and rebooting the system. Another advantage, perhaps of lower importance on development systems but nonetheless essential for my work-and-development laptop, was that root permissions were not required to run a virtual machine.
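
That reload cycle looked roughly like this (a sketch; the module file names are from an x86 Intel build, with kvm-amd.ko as the AMD counterpart):

# Unload the old modules, architecture-specific one first:
sudo rmmod kvm_intel kvm
# Load the freshly compiled ones, generic module first:
sudo insmod ./kvm.ko
sudo insmod ./kvm-intel.ko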

Another handy debugging trick that was made possible by the separation of the KVM module and QEMU was that if something didn’t work in KVM mode, but worked in emulated mode, the fault was very likely in the KVM module. If some guest didn’t work in either of the modes, the fault was in the device model or QEMU.

The early KVM release model helped with a painless development experience as well: even though the KVM project was part of the upstream Linux kernel, Kivity maintained the KVM code on a separate release train. A new KVM release was made regularly that included the source of the KVM modules, a small compatibility layer to compile the KVM modules on any of the supported Linux kernels, and the kvm-userspace piece. This ensured that a distribution kernel, which had an older version of the KVM modules, could be used unchanged by compiling the modules from the newest KVM release for that kernel.

The compatibility layer required some effort to maintain. It had to ensure that new KVM code using kernel APIs not present in older kernels kept working, by emulating those newer APIs. Adding such API compatibility functions was a one-time cost, and it significantly lowered the barrier to entry for new contributors: hackers could download the latest KVM release, compile the modules against whichever kernel they were running, and see virtual machines boot. If that did not work, developers could post bug-fix patches.
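
Compiling modules against the running kernel follows the usual out-of-tree kbuild pattern (a sketch of the general flow using today's generic invocation; the historical KVM releases wrapped this in their own compatibility layer and build scripts):

# From the directory holding the module sources and their kbuild Makefile:
make -C /lib/modules/$(uname -r)/build M=$PWD modules
sudo insmod ./kvm.ko   # then load and test the result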

Widespread adoption

Chip vendors started taking interest and porting KVM to their architectures: Intel added support for IA64 along with features and stability fixes to x86; IBM added support for s390 and POWER architectures; ARM and Linaro contributed to the ARM port; and Imagination Technologies added MIPS support. These didn’t happen all at once, though. ARM support, for example, came rather late (“it’s the reality that’s not timely, not the prediction”, quipped Kivity during a KVM Forum keynote when he had predicted the previous year that an ARM port would materialize).

Developer interest could also be seen at the KVM Forums, an annual gathering of people interested in KVM virtualization. The first KVM Forum in 2007 gathered a handful of developers in a room for discussions about the current state of affairs and where to go in the future. One small group, headed by Rusty Russell, took over the whiteboard and started discussions on what a paravirtualized interface for KVM would look like. This is where VIRTIO started to take shape. These days, the KVM Forum is a whole conference with parallel tracks, tens of speakers, and hundreds of attendees.

As time passed, it was evident the KVM kernel modules were not where most of the action was — the instruction emulation, when required, was more or less complete, and most distributions were shipping recent Linux kernels. The focus had then switched to the user space: adding more device emulation, making existing devices perform better, and so on. The KVM releases then focused more on the user-space part, and the maintenance of the compatibility layer was eased. At this time, even though the kvm-userspace fork existed, effort was made to ensure new features went into the QEMU project rather than the kvm-userspace project. Kivity too started feeding in small changes from the kvm-userspace repository to the QEMU project.

While all this was happening, Qumranet had changed direction, and was now pursuing desktop virtualization with KVM as the hypervisor. In September 2008, Red Hat announced it would acquire Qumranet. Red Hat had supported the Xen hypervisor as its official VMM since the Red Hat Enterprise Linux 5.0 release. With the RHEL 5.4 release, Red Hat started supporting both Xen and KVM as hypervisors. With the release of RHEL 6.0, Red Hat switched to only supporting KVM. KVM continued enjoying out-of-the-box support in other distributions as well.

Present and future

Today, there are several projects that use KVM as the default hypervisor: OpenStack and oVirt are the more popular ones. These projects concern themselves with large-scale deployments of KVM hosts and many VMs in one deployment. They come with various use cases, and hence ask different things of KVM. As guest OSes grow larger (more RAM and virtual CPUs), they become more difficult to live-migrate without incurring too much downtime; telco deployments need low-latency network packet processing, so realtime KVM is an area of interest; and faster disk and network I/O is always an area of research. Keeping everything secure while reducing the hypervisor footprint is also being worked on, and the ways in which a malicious guest can break out of its VM sandbox, and how to mitigate such attacks, are a prime area of focus.

A lot of advancement happens with new hardware updates and devices. However, a lot of effort is also spent in optimizing the current code base, writing new algorithms, and coming up with new ways to improve performance and scalability with the existing infrastructure.

For the next ten years, the main topics of discussion may well not be about the development of the hypervisor itself. More interesting will be to see how Linux gets used as a hypervisor: bringing better sandboxing for running untrusted code, especially on mobile phones, and running the cloud infrastructure, pervasive and invisible at the same time.

From: http://lwn.net/Articles/705160/