While looking for an actual pastebin tool, I came across this one. It looks more interesting when you're showing extensive code that could use better syntax highlighting and a nice darker background. Leaving it here for reference, for myself and others.
Sysadmin Journal
Monday, January 25, 2016
Wednesday, December 30, 2015
Extreme User Experience
This is a joke, not a post or an opinion.
The page/interface below has everything it needs, including a link to hook you up -- everything except the most basic thing:
What are they talking about? What do they call "paper"? The dictionary definition? A4 sheets?
It fascinates me how many people, and how much of those people's time, "Internet Companies" are spending on this subject today.
What everyone talks about is "Product" and "User Experience", which has apparently even become a career.
I could go on for hours about how much attention has been given to this area, but it is a pretty good joke that, after all, it defeated its own purpose.
If I'm Homer Simpson, "Paper" is being sold to me -- it has everything to convince me that it's exactly what I need, except one thing: I have no idea what it is.
Sunday, February 22, 2015
Deploying ChefDK on Fedora 20
Just writing a quick post to document something that I only found in a really hidden email thread on the Opscode mailing lists.
I attended a really nice Chef training today that covered pretty much everything needed for cookbook testing, and the main (and only) toolkit was the ChefDK.
Getting home, I was pretty excited and decided to try it out to develop some useful cookbooks. I found most of what I needed here.
But when trying to run it, I bumped into a number of errors concerning Ruby gems. Later on I figured out from the mailing list (while googling the error) that this needs to be done:
echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile
It looks like that command spits out the right environment, with all the libraries and binaries in the right place. Once that's in your session's environment, everything should work. At least the Vagrant provisioner worked for me.
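If you want to see the effect without opening a new shell, a quick check like this makes it visible (the /opt/chefdk path is just where the installer put things on my machine; treat it as an illustration):
~$ eval "$(chef shell-init bash)"   # apply it to the current session
~$ which ruby                       # should now resolve to the ChefDK bundle,
                                    # e.g. /opt/chefdk/embedded/bin/ruby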
Later on I learned that these steps are in the "Documentation" linked on the page, but I think the deployment procedure could be more explicit (and somewhat decoupled from the "how to use it" documentation).
So hopefully this will provide one more helpful result for the Google searches of people who don't spot that at first.
==== Installing the Docker plugin ====
Installing plugins for Kitchen (the main tool of the ChefDK) is pretty much installing the corresponding Ruby gem. It already comes with "test-kitchen" and "kitchen-vagrant". To support Docker, you want to install "kitchen-docker".
There's one thing to note, though: since you have to eval the Chef environment, a number of things are not in the system default locations but in the ChefDK bootstrap. I learned the hard way that a plain "gem install kitchen-docker" doesn't help. To have your Kitchen well integrated with Docker, you need to run the following, but only after you do the eval described above and load the right environment:
chef gem install kitchen-docker
In the same way, you can always run "chef gem list" before and after that, just to make sure things are in place (or not).
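To double-check, and to actually wire the plugin up, something like this should do (the .kitchen.yml stanza is the plugin's convention as I remember it; double-check it against the kitchen-docker README):
~$ chef gem list | grep kitchen-docker   # the gem should be listed now
# Then point your cookbook's .kitchen.yml at the new driver:
#   driver:
#     name: docker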
That should be it. Feel free to comment if there are any questions.
Saturday, August 9, 2014
Straightforward Docker for DevOps
Warning: this post assumes you have basic-tutorial-level knowledge of Docker. If you don't, please spend 10 minutes there first. I swear, it only takes that long.
Technology has been evolving faster than usual in recent years. If your infrastructure keeps you busy enough that it's not an optimal use of time to read multiple endless articles or watch different one-hour talks about new technologies, this is my short summary of what is relevant to help you use the advantages of Docker, without the hassle of learning the new-suggested-solution-of-the-universe way of operating described in the documentation.
The main advantage of Docker is that it does pretty much the same as a virtual machine, without as much overhead. Think of it as the good old BSD jail. It is also much faster to start a Docker container than to start a virtual machine. Docker tries to isolate one process per container; that will be, for example, your web application deployed in Apache.
One scenario we can easily imagine: you want to test Puppet modules inside Vagrant boxes from within Jenkins. By the time I tried that, the Vagrant plugin was badly broken, so I thought of having a closer look at Docker, as there was also a Docker plugin.
Some useful concepts/general tips:
- You will find a raw image of the distro you like and will have to build your environment a bit on top of it. That's fine: do it, "docker commit", and you will have what you need ready to use.
- I found mainly 2 ways to use it:
  - Interactive session: if you use VMs for validation/testing, you're at home.
  - Run a container to execute "a job" and exit, get the exit code, and do something with it.
    - For example, "cpan install My::Perl::Module" and make sure all goes fine (exit code 0).
- There is a third usage that I will leave aside: running a service such as a webserver inside the container and leaving it running for a long time, for other purposes.
- All changes need to be committed, otherwise they get lost. This is both good and bad.
  - Get used to dividing the bigger task into chunks, so you commit small chunks and can revert if needed.
  - Basically, aim to have a checkpoint RIGHT BEFORE running your test.
I think this list is good enough, so let's proceed to the last part, where I actually show how to run the VM-like interactive session and then reuse what you build there in the test workflow, non-interactively.
Although a lot of people say it's a sin to run Docker as root, I think it's OK to do so depending on the context -- in a test/dev environment, surely. So you will miss your dear sudo, sorry:
~# docker run -t -i samircury/centospuppet /bin/bash
Pay attention to the -t -i flags: -t allocates a pseudo-terminal and -i keeps STDIN open, which is what makes the session interactive.
This will give you a terminal in your Docker container. Go there and do all the customization you need, such as "yum install gcc expat-devel", configuring CPAN, everything. Make sure everything goes fine up to the point where you can run your test. To get concrete, what I want is for "cpan HTCondor::Queue::Parser" to install cleanly.
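A typical warm-up inside the container looks something like this (the prompt and the exact packages are illustrative; adapt them to whatever your test needs):
bash-4.1# yum install -y gcc expat-devel
bash-4.1# cpan HTCondor::Queue::Parser   # first run: note down every dependency it pulls in
bash-4.1# exit                           # leave without committing -- this run was just reconnaissance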
What I like to do at this stage is go all the way to the end -- install the dependencies, see that the module installs properly -- and then exit without committing any changes. By then I know all the dependencies, as I wrote them down somewhere.
The next step is to get a bash again, install only what I need for the test, exit, and run:
~# docker ps -l # lists your most recently created container, so you can grab its ID for the commit
~# docker commit a3e333a99b231f56 samircury/centospuppet
OK, now we're all set for the test. Let's drop the -t -i flags, run it non-interactively, and see the magic happen:
[root@darkstar ~]# docker run samircury/centospuppet /usr/bin/cpan HTCondor::Queue::Parser
--suppressed-extremely-long-cpan-output--
Installing /usr/local/share/perl5/HTCondor/Queue/Parser.pm
Installing /usr/local/share/man/man3/HTCondor::Queue::Parser.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
SAMIRCURY/HTCondor-Queue-Parser-0.04.tar.gz
/usr/bin/make install -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
[root@darkstar ~]# echo $?
0
Now an interesting observation: you saw that $? was 0 -- everything was fine. But if I use the standard CentOS image:
[root@darkstar ~]# docker run centos /usr/bin/cpan HTCondor::Queue::Parser
2014/08/10 00:02:31 exec: "/usr/bin/cpan": stat /usr/bin/cpan: no such file or directory
[root@darkstar ~]# echo $?
1
I won't pollute this post with all the evidence, but I did extensive testing, and I found it very nice that the exit code of your non-interactive command is propagated to the exit code of the "docker" command itself. In other words, in Jenkins I wouldn't even need a Docker plugin if I used this exit code instead.
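As a sketch of what that looks like in Jenkins -- just a plain "Execute shell" build step, no plugin, using the same image and module from this post:
# Jenkins marks the build as failed whenever the step exits non-zero,
# and docker run hands it exactly the exit code of the command inside:
docker run samircury/centospuppet /usr/bin/cpan HTCondor::Queue::Parser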
I hope this helps people get to the point where they can use Docker for something useful, faster.
Sunday, August 3, 2014
Yet Another Zabbix review
This will be yet another Zabbix review, but I feel it is worth documenting some very nice things that I managed to do in my environment, in very little time, using this tool. Not that they can't be done with similar tools, but Zabbix seems to concentrate the advantages of many tools in one, and everything you want to do feels pretty natural; most of the time you won't need the documentation to get it done -- you just figure out features as you need them.
However, I will try to be complementary to the other reviews I have read and focus on the features I used and why they are interesting. The overview: it does availability monitoring as well as Nagios would, and it reduces the need for Cacti, because nearly anything it monitors can be plotted. It is not as precise as Cacti, though.
I will start by mentioning one of the most interesting features, one I have not seen in other software of this kind: it discovers your network. This is useful if you don't want to register every single node by hand, or if you want to detect intruders. Worth noting that although I never heard of it in Nagios, it is a very common feature in proprietary tools, according to colleagues working in places that are willing to afford those.
One can also control how often discovery runs, so your hosts don't become overloaded. This is especially necessary on big enough networks; mine, with 300+ hosts, needed some tuning. In addition, there is an alternative setup: whenever Zabbix agents start on hosts you want to monitor, the hosts get automatically added to host groups X, Y, or Z, depending on rules you define in the corresponding registration action.
This is also possible with normal discovery, but the set of parameters you can play with is smaller, as discovery is based on ICMP or on the open ports of basic services.
The best setup I found here was to discover hosts on the public network, and to let the hosts on the private network add themselves as their Zabbix agents start, right after Puppet configures them automatically. They are also added to a special host group that automatically gets the monitoring templates I want.
So this should give you an idea of how easy it is to get all the hosts you want monitored set up in the first place. Now for the interesting features. The Zabbix agent already monitors many aspects of your system automatically, and it also lets you request additional parameters to be monitored -- for example, whether a process with a name like X is running, or whether port Y is open, and so on. No SSH checks needed for that :-)
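Both of those examples are covered by item keys the agent already ships; you can test them from the Zabbix server with zabbix_get (key names quoted from memory -- double-check them against your agent version):
~$ zabbix_get -s myhost -k 'proc.num[httpd]'    # number of processes named httpd
~$ zabbix_get -s myhost -k 'net.tcp.port[,443]' # 1 if port 443 accepts a connection, 0 otherwise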
Like Nagios, it supports SSH checks. It also supports SSH keys with passphrases, which slightly increases the security of your systems.
There is a fundamental difference, though: while Nagios expects an exit code from the script as the check status, Zabbix expects a value -- an integer, a string, a Boolean -- and makes a decision about that value later, for example triggering an alarm if the value is bigger than 10.
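As a minimal sketch of that model (the key, the script path, and the host name are made up; the trigger syntax is the Zabbix 2.x style, quoted from memory):
# Agent side: expose a number through a custom key in zabbix_agentd.conf.
~# echo 'UserParameter=myapp.queue_length,/usr/local/bin/queue_length.sh' >> /etc/zabbix/zabbix_agentd.conf
~# service zabbix-agent restart
# Server side: the decision lives in a trigger expression, for example firing
# when the last collected value is bigger than 10:
#   {myhost:myapp.queue_length.last(0)}>10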
I should mention that every single metric it collects from a host can become a historical plot. Correlating these plots is also very easy: if I want to see how two values correlate, I can plot them together and watch how they evolve. I could also plot the two values separately, or add them up; even more complex functions are possible for the end result. You can also create aggregated plots for an entire host group.
A real example: I have a pool of transfer servers and I want to know the total transfer rate. All I do is configure an aggregated plot, over the entire host group, of the transfer rate of the public network interfaces.
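Under the hood that is an aggregate item; if I remember the Zabbix 2.x syntax correctly, the key for the example above would look something like this (the group and interface names are mine):
# Sum of the last outgoing rate on eth0 across every host in the group:
#   grpsum["Transfer servers","net.if.out[eth0]",last,0]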
It also has a very straightforward proxy architecture for when you want to monitor remote networks you don't have direct access to. For basic setups, nearly the default configuration will be enough.
The proxy will then be able to run SSH checks on the remote network for you, or receive metrics from all the hosts on that private network and forward them to the Zabbix server.
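For the basic case, the proxy configuration really is small; from memory, the two lines in zabbix_proxy.conf that you can't skip are these (host names are examples):
# /etc/zabbix/zabbix_proxy.conf (excerpt)
# The central server this proxy reports to:
Server=zabbix.example.org
# Must match the proxy name registered in the frontend:
Hostname=remote-site-proxy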
Another aspect I like is that the template system is really good. You can assign monitoring templates to given host groups and have changes applied to all of their hosts. You can very quickly create your own custom templates containing:
- Metrics to monitor (SSH checks included)
- Triggers (alarms) for these metrics
- Plots for these metrics
- Probably other things I'm forgetting now
That's it for now, but I will update this post (or create a new one) with some screenshots to illustrate all of this.
Start and goals
For those who don't know me: I'm a systems administrator who has worked with high-performance computing for about seven years. My focus used to be monitoring, and developing tools to improve it when needed, but lately I have been doing some work with high-speed networks at the scale of 100 Gbps, and there may be some posts about that coming up.
I'm trying to make this an interesting place, so I'm avoiding posting every single thing that happens and filtering for subjects that are worth sharing.
It is also useful to me as a journal of the subjects worth keeping a record of, so I can refer back to them later when discussing with people.
My interests in the near future are distributed filesystems; OpenStack and its trade-offs in small environments; continuous integration; and the usual DevOps topics without all the hype -- with some skepticism instead, and maybe some remarks about how the wrong approach can generate wasted time. A side topic that is not totally related but interests me a lot is time-management techniques.
PS: I tried to name the blog "sysadminjournal", but not surprisingly that subdomain was not available. C'est la vie; we use this one instead.