Post useful scripts for Windows/Linux that you've written!
I was working on one last night that uses nbtscan (NetBIOS scan, Win/Linux) to scan for all computers on a network and show their workgroup and MAC address. This way I can easily tell how many computers are NOT joined to the domain like they should be.
It computes the network range/CIDR from your IP and subnet mask, runs nbtscan, and then parses the output. I could also run nmap the same way.
I just went back and used "here documents" to embed the supporting AWK scripts into the bash script. Pretty fun stuff.
Output is:
Using default, hardcoded interface wlan0.
Using interface [wlan0] ip=192.168.1.175 netmask=255.255.255.0 network=192.168.1.0
192.168.1.0 Sendto failed: Permission denied
192.168.1.255 Sendto failed: Permission denied
range=24 cidr=192.168.1.0/24
192.168.1.196 - HPD1AA8C - MSHOME - 00:00:00:00:00:00
192.168.1.147 - TITAN - WORKGROUP - 00:af:86:22:11:a7
192.168.1.175 - SATURN - WORKGROUP - 00:00:00:00:00:00
Basically, you just run the script (optionally give it the interface name you want to use, wlan0 is default) and it'll grab your network information, compute a proper CIDR / network range (without needing another app), and then pipe that to nbtscan. I might add nmap later. Previously, I used ipcalc (another apt-get package) to produce the CIDR but it was fun replacing it with another awk script.
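The actual computation is done in one of the embedded awk scripts, but the gist of it in plain bash looks roughly like this (a simplified sketch, not the real script; the addresses are just my example):
ip=192.168.1.175; mask=255.255.255.0
IFS=. read -r i1 i2 i3 i4 <<< "$ip"
IFS=. read -r m1 m2 m3 m4 <<< "$mask"
network="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
bits=0   # count the 1-bits in the mask for the prefix length
for o in "$m1" "$m2" "$m3" "$m4"; do
    while (( o )); do (( bits += o & 1, o >>= 1 )); done
done
echo "cidr=${network}/${bits}"   # cidr=192.168.1.0/24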
I have a post-install script that adds a basic set of tools to an Ubuntu or Linux Mint install. Warning: this installs PHP 7.
https://gist.github.com/derrekbertrand/7668a695911260dee0c8
Mounts a folder on a remote host using sshfs, pokes it every minute to make sure the connection doesn't break. Honestly, I've stopped having a need for this, as I've changed workflows.
https://gist.github.com/derrekbertrand/8911d178ebe23b15670c
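The basic shape of it is something like this (a rough sketch from memory, not what's actually in the gist; the host, paths, and cron line are placeholders):
sshfs -o reconnect,ServerAliveInterval=15 user@example.com:/srv/share ~/mnt/share
# then "poke" the mount every minute, e.g. from cron, so it doesn't go stale:
# * * * * * stat ~/mnt/share > /dev/null 2>&1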
I also have this which provisions an account on a server with nginx for the Laravel framework (mostly). Mainly for my internal use.
I need to learn awk someday. That's one tool that I'm lacking. All I know is "{print $1}" where $1 is a positional field in the input. I do know Perl though which is probably more powerful, featureful, and clean; so I would generally rely on that, but having awk would be nice when perl isn't available.
When possible, you should prefer [[ over [ because it's smarter (less quoting required). I would generally quote the right-hand side of a variable assignment unless I know it can't possibly contain whitespace (e.g., a static value). In the case of a variable or a subcommand I'd quote it just to be safe. Generally with UNIX-like scripting languages, quotes nest properly without thinking too hard about it. In Windows, I'm sure you're aware that it's a brainfuck...
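To illustrate what I mean (a trivial example, nothing from the actual scripts):
var="two words"
[ "$var" = "two words" ] && echo ok      # with [ the expansion must be quoted or it word-splits
[[ $var == "two words" ]] && echo ok     # [[ doesn't word-split, so the quotes aren't needed
out="$(hostname)"                        # quoting a subcommand on the right-hand side, just to be safe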
You just taught me about here-strings!
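(They're closely related, for anyone mixing up the terms: a here document feeds a multi-line block to stdin, a here-string feeds a single string. Something like:)
awk -F, '{ print $1 }' <<'EOF'     # here document: multi-line stdin, nice for embedding awk in bash
alpha,1
beta,2
EOF
grep -c o <<< "foo bar"            # here-string: a single string as stdin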
I don't particularly have any scripts in mind at this time. Much of my scripting ends up in my rc repo: https://github.com/bambams/rc (i.e., .bash.d.source, .bash.d). Depending on context I sometimes put them into other repos. For example, git and mercurial scripts have their own repos. Those tend to be very ugly and hacky scripts that go stale if they aren't practical.
Here's an example of a simple script that defines a function which allows me to interact with wifi connection settings from a machine that doesn't have automatic wifi setup. It was written for my EeePC netbook running a basic Debian system that booted into text mode where I'd start X manually running xmonad as the window manager. With some magic in /etc/sudoers I was allowed to change a symlink in /etc/network/interfaces.d without a password which is where I'd manually add configuration for networks I knew about and used (generally, only home, and places I'd spend the night).
Since I am always working from a command line I generally wrap repetitive tasks into a smart command such as this so that I can type a lot less and don't have to remember the intricate details.
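The guts of it boil down to something like this (heavily simplified; the real function does more error checking, and the paths and names here are just illustrative):
wifi() {
    local name="${1:?usage: wifi <network-name>}"
    # point the live config at a known network's stanza, then bounce the interface
    sudo ln -sf "/etc/network/interfaces.d/available/${name}" \
        /etc/network/interfaces.d/wlan0 || return
    sudo ifdown wlan0
    sudo ifup wlan0
}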
I need to learn awk someday. That's one tool that I'm lacking.
Same here. I've always been able to get by with bash, Python, and of course non-awk coreutils... I know there are a few sed/grep lines I've written that would have been more elegant with awk.
Maybe someday.
I have no scripts good for sharing, but I would like to share a simple tip: Check out the zenity, dialog, and osd_cat packages of your distribution.
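A quick taste of each, assuming they're installed (the backup script is made up):
zenity --question --text="Run the backup now?" && ./backup.sh   # GTK dialog from a script
dialog --yesno "Run the backup now?" 7 40                       # the same thing in curses
echo "backup finished" | osd_cat --delay=5                      # on-screen display overlay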
I always put off learning awk until last night. It's actually super easy and has a C-like syntax. You have:
BEGIN {}
{}
END {}
Where BEGIN and END are optional header/footer blocks executed once before and after. Otherwise, everything is executed once per line. You can set up the field separator (FS) to be whitespace (default), or commas or dots, etc.
There are some strange things. $0 is the whole line. $1 is the first field. $2 the second. HOWEVER, variables do NOT have dollar signs like bash. Dollar signs are only for special fields. I kept running into problems with that initially. NF is the number of fields, hence the for loop between 1 and the number of fields (0 being ignored because $0 is all of the fields).
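For instance, off the top of my head (data.csv is just a made-up input file):
awk 'BEGIN { FS = ","; total = 0 }
     { total += NF; print "line " NR " has " NF " fields, first is " $1 }
     END { print "saw " NR " lines and " total " fields" }' data.csv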
Disclaimer: I've only started learning it so there may be better ways, or incorrect nomenclature used.
Nitpick mode activated. I realize that you didn't necessarily write all of this, and its purpose is limited so the code doesn't have to be perfect, but we're here so I figure we might as well point out improvements that can be made.
To make it so that you don't have to modify the source itself you can make those environment variables inherited from the environment, and defaulted if unset. That's how I'd prefer to do it. That way the source doesn't have to change at all and the actual "config" can be put in a separate file sourced by your shell.
VARNAME="${VARNAME:-default_value}";
If they aren't environment variables then it's probably a better idea to make them lowercase so that they'll be less likely to collide, and they'll stand out differently too.
VARNAME=value; export VARNAME # vs varname=value;
There's basically no value in doing this (unless maybe you're going to modify PATH and want to make sure it doesn't screw with things). An absolute path can be useful for processes running with elevated privileges, but only if the absolute path is known. Querying the system for the absolute path is no different than just letting the system resolve it.
Note that this relies on $ being literal because of what follows. It's probably acceptable, but it made me question it. Using a strategy like Chris did to interpolate parts, especially repeating ones, might make it easier to understand. This is a pretty common way to simplify regular expressions.
wrdchr_re="a-zA-Z0-9"; bndchr_re="[${wrdchr_re}]"; midchr_re="[${wrdchr_re}\-]" domprt_re="(${bndchr_re}|${bndchr_re}${midchr_re}*${bndchr_re})"; domain_re="^(${domprt_re}\.)*${domprt_re}\$";
I chose somewhat obscure names just to make them short to keep the composition in a single line/terminal.
Unless you're targeting various shells with varying capabilities you can probably reduce this to:
read -p "Would you like use ssl on this site (y/N)?" USESSL
Alternatively, I'd at least use "echo -n" so that the response was on the same line as the prompt.
The thing that should be quoted, $USESSL, is not and the thing that doesn't need to be quoted, the y, is. Or just use [[ and neither needs to be quoted.
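That is, something like (illustrative):
if [ "$USESSL" = y ]; then echo "ssl"; fi     # with [ quote the expansion, not the literal y
if [[ $USESSL == y ]]; then echo "ssl"; fi    # with [[ neither side needs quotes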
Lastly, I'd reiterate quoting of variable expressions and subcommands.
Append:
Thanks, Chris! That pretty much takes the mystery out of the first awk program. I'm speculating that what looks like array references in the latter is actually storing data into a dictionary, but I'm not sure? It seems like it would help an awful lot to understand those awk programs if you began them with a comment describing the input format as best as you understand it, or even offer a short sample input for reference. For somebody familiar with the input source it's probably not very necessary, but since I've never used nbtscan I have no idea what to expect.
@bams: No, I only wrote like 20% of that. The original was old and did the job, but I wanted a little more functionality out of it, and I wanted to tweak some things to suit my needs. More importantly, with some fidgeting, it will properly set up an SSL cert.
So yeah, if I were to rewrite it myself, I'd clean it up substantially. But you can't complain about free shit on the internet.
EDIT:
Now that I'm looking closer, the latter half of your post was written by me. Let's just put it out there that bash is not my strong suit.
But you can't complain about free on the internet.
Disagreed.
Let's just put it out there that bash is not my strong suit.
Bash (and sh-variants in general) tends to be something that people don't start out caring about, but eventually its usefulness works its way into your life. I gradually developed a somewhat intermediate level of skill hacking in bash. If you do choose to do any hacking in bash I recommend joining #bash on freenode to ask for criticisms or help. They are extremely helpful, extremely wise, have a ton of readily linkable best practices at their disposal, and they can seriously improve the quality of your bash code and give you a lot more confidence in it.
I also have experience being put onto a system with an ancient sh variant. You don't know how much you'll miss the bells and whistles until they're gone. It was still refreshing to see how many features were still supported even then, but it was still a bit of a fight to get comfortable with it.
Another thing I learned last night:
apt-get install most
export PAGER=most
most lets you use color highlighting in man pages!
[edit]
Okay, so the second awk. awk supports associative ("hash") arrays: you can index them with any value and it's translated internally with a hash function. This was actually the first awk program I made with arrays too.
The BEGIN block sets the FS (field separator) to comma.
Then, for each line:
(field and value I think are old versions as I was debugging.)
data is indexed with IP values that are auto-converted with a hash.
data[192.168.32.1]["ip"] = 192.168.32.1
then if nbtscan
So we then take each IP and based on the third column, we decide what to do with it. If it's Workstation Service, it's the computer name. Domain Name is the domain name / workgroup. However, if the SECOND column is MAC, we use the third column for the data. All straightforward stuff, just ripping the data from the nbtscan output.
So for each line, we throw the data into a data structure. And then when we're done, we just dump it all out in a tabular format.
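The overall shape is something like this (a toy version from memory, not the real script; it assumes gawk for the nested arrays, and the nbtscan flags and column layout are roughly as I remember them):
nbtscan -v -s , 192.168.1.0/24 | awk -F, '
    { ip = $1 }
    $3 ~ /Workstation Service/ { data[ip]["name"] = $2 }
    $3 ~ /Domain Name/         { data[ip]["group"] = $2 }
    $2 ~ /MAC/                 { data[ip]["mac"] = $3 }
    END {
        for (ip in data)
            printf "%s - %s - %s - %s\n", ip, data[ip]["name"], data[ip]["group"], data[ip]["mac"]
    }'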
And that first AWK is just taking a subnet string (delimited by periods) and converting each field into bits, taking advantage of the fact you can't have any zeros in between ones in a subnet mask (11111000, never 01001101), so we don't really have to convert to binary and can just match the 8 possible cases for each field. [Of course, this is IIRC. If subnet masks CAN have zeros in the middle this will break apart.]
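In lookup-table form the trick looks roughly like this (simplified from what's actually in the script):
echo 255.255.255.0 | awk -F. '
    BEGIN { bits[255] = 8; bits[254] = 7; bits[252] = 6; bits[248] = 5
            bits[240] = 4; bits[224] = 3; bits[192] = 2; bits[128] = 1; bits[0] = 0 }
    { print "range=" (bits[$1] + bits[$2] + bits[$3] + bits[$4]) }'   # range=24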
[edit] I actually wasn't sure whether to comment the script as a kind of tutorial or not. I could add some more if people really want it.
Before you all start changing PAGER to most: http://unix.stackexchange.com/a/81131. The Web site for it is incredibly bare. Most ... of it is "under construction". It's basically undocumented from the looks of things (note: I haven't installed it). In any case, less is pretty featureful already so I'm not really in need of a better pager. And apparently most does lack a few less features. Still, thanks for spreading the knowledge, Chris. I'll be watching for it next time it comes up to see if the game has changed. In the meantime, as with probably most people, LESS is set to FRSX. If yours isn't Google why maybe it should be.
Thanks for the explanation of the 2nd awk program. It makes sense. It also helps me to see where inspiration for some Perl features may have come from.
As for subnets I think it's guaranteed to be 1's and then 0's (never mixed). The subnet mask is supposed to tell you which bits of the address are network address versus host address. And it wouldn't make sense to mix them up.
I just realized yesterday that awk can actually do much more:
BEGIN {} pattern{} pattern{} ... END{}
Where pattern can actually be a regular expression:
/taco/{}
Or a variety of functions:
# check the whole line, $0; could also have done only $1/$2/etc.
# RSTART is the index of the first match (match() also returns it)
match($0, /taco/) { print "found at", RSTART }
Also, NR is the number of records... when it's in END. But in the normal lines? It tells you WHICH line number you're on.
Both of those really let you get up and running with logic for each pattern very quickly instead of having to write some more complex, general string-compare switch statement.
Awk can do many things. There's nothing it can't do, or nothing that would surprise me, after seeing someone make a raycaster with it...
But of the good bits of Unix, I think shell scripts are... terrible? They're a great idea, but the implementation is straight out of the 80s at best (and not in a good way).
It probably wouldn't be too difficult to make a simple shell that leverages a powerful scripting language for personal use. For example, I somewhat like the idea of PowerShell (programs/actions being objects), but the syntax is even worse than any popular Unix shell... I've thought about it, but I can barely get out of bed so it's definitely out of my league right now.
(A shell that properly embraces and extends my favorite scripting language, Lua, would be pretty neat...)
The problem with that is that a shell using a proper scripting language would be a bitch to use manually (i.e. entering commands in the terminal). Imagine using the Python or Node.JS interpreters as an everyday shell! Unix shells essentially need to serve two masters: They need to be Turing complete so you can automate anything with a shell script, but also have a compact syntax so manual commands can be entered quickly.
Besides, you can already do what you want by adding a shebang to the top of the script so that the OS will run it using the proper interpreter. For example, you could write a Python script starting with #!/usr/bin/python and run that like you would any other shell script. Since you specifically mention personal use, installing Python (or Lua, or whatever) shouldn't be an issue.
On the contrary, Unix shell languages are timeless. They're more likely to be straight out of the 70s with most features and the 60s for others, but I digress. This was at a time when they understood well how best to communicate with computers. It's pretty interesting to see just how many things they got right compared with future generations and how their ideas from those origin days still remain the best ideas today.
PowerShell sounds like a great idea in theory. Communicate with objects? How cool! The idea really breaks down when you realize how complicated that makes everything. The brilliance of the standard stream interface is that any two programs can talk to each other regardless of whether or not they were designed to do so. In fact, they could have been written in completely isolated systems without ever knowing of each other's existence, and later you can interface them together easily either directly (a | b) or indirectly (a | c | b), if necessary.
My understanding of the PowerShell interface is that programs essentially need to be explicitly written with specific object interfaces in and out. While nothing should theoretically prevent you from writing that c program to transform one object into another one, consider the verbosity of statically-typed OOP and how that's supposed to fit onto a command line!
In practice, c is usually not some custom program a_to_b, but rather it's awk ..., sed ... or perl ... oneliners hacked right there on the command line in 30 to 60 characters or so. Certainly more lengthy solutions exist, and often they are indeed written to disk as a separate command (or module), but the power exists because the communication interface for every program is a stream of bytes (and that stream could very well represent an object, a file, text, or anything else).
Since I am predominantly a command-line user I always operate from a command shell when I can, even in Windows. A few years back I made the conscious choice to just bite the bullet and switch from cmd.exe to PowerShell. And I spent a good 6 months exclusively in PowerShell trying to figure out how to use it effectively (note: cmd.exe has always been pretty terrible so I was very anxious for something better). I still had MinGW/MSYS in my PATH so I was able to use the UNIX-like tools from the PowerShell console. What I found was that learning PowerShell commands and invoking them was extremely difficult compared to the UNIX-like tools I already had at my disposal. There was no benefit to learning the PowerShell commands, and in fact they lacked in power compared to what the standard shell interfaces already had. The console program (i.e., the window/emulator/etc.) itself was very disappointing in that it had no features that cmd.exe didn't have.
PowerShell cmdlets are not just a man page or Google away from figuring out some command and wiring it up. Instead, it is a painful process of trying to learn and understand an entire custom API for every command, and battling frustratingly with the fact that they just can't do certain things. IIRC, there's no proper concept of STDERR. They had some other custom error thing that worked way differently. It just made for very clumsy, looooooooong commands that barely worked. Things that you'd expect to work don't. And the shell lacked features that UNIX shells have had for nearly 50 years... I doubt that has improved much in the past 5 years. In general, it's a massive failure in my eyes.
It has been a long while since I used it or cared so it's possible that things or at least documentation has changed, but for a taste of what I mean see here: http://stackoverflow.com/questions/4998173/how-do-i-write-to-standard-error-in-powershell. The friendly PowerShell users trying to help just don't grasp the simplicity of what this guy is trying to do. Eventually, they're basically like, "well, you can't do that, ... but this is close...?"
I don't recall now where I got this idea, but I seem to recall reading that Microsoft never set out with the knowledge of UNIX-like shells to do a better job of it. As usual, they wanted incompatibility and vendor lock-in. Their motivation was apparently to write a brand new shell to become a standard of sorts. For embedded devices or CMOS systems or who knows what...
You'll note before PowerShell came "Windows Script Host" which allowed for "shell scripting" to be done in VBScript or JavaScript largely dependent on another Redmond atrocity, COM objects. These are terrible interfaces, despite JavaScript actually being a pretty great programming language.
I'd argue that if you find bash somehow deficient in features or clumsy then you just haven't spent enough time with it. This would be a fantastic thread to pose some questions or complaints and see if our collective knowledge can enlighten you.
Append:
Awk can do many things. There's nothing it can't do, or nothing that would surprise me, after seeing someone make a raycaster with it...
Wow. That's awesome. I tried it. It was hard. I failed.
The problem with that is that a shell using a proper scripting language would be a bitch to use manually (i.e. entering commands in the terminal). Imagine using the Python or Node.JS interpreters as an everyday shell! Unix shells essentially need to serve two masters: They need to be Turing complete so you can automate anything with a shell script, but also have a compact syntax so manual commands can be entered quickly.
I figured there would be a "lightweight" input mode similar to current shells that's suitable for most basic commands, like redirecting input and piping and all. Anything a bit more advanced would let you switch into a pseudo-REPL mode with the syntax of the language...
I often enough write one-off commands that are incredibly cumbersome to type/parse (but are so specific at that moment writing a shell script would be pointless) that having a more sane syntax would be nice... Of course, for most other tasks, a lightweight input mode would be fine, but there's a use case for a slightly more verbose, but more readable, method of entering commands.
I just started touching Powershell a few months ago. I really liked it. It was a god-send trying to debug an Exchange server. Grab all messages in the queue, filter them by these fields, and send them to the screen in a nice table. It worked great. I did some other stuff too. Nothing needed hardcore conversions like my typical awk program though. And EVERYTHING was self-documenting, including the data structures.
The thing is, powershell can link to almost everything that a GUI can in windows now. COM, WMI, and .NET all can be called. Microsoft provides a unified interface, and powershell easily links up to those interfaces. It also easily returns class objects with methods, and can easily be linked up with .NET sub-programs. To Microsoft's credit, Windows has very clear boundaries for their interfaces so adding a new interface or scripting language is a simple, entirely encapsulated job.
However, to Bam's issue, scripting tends to involve badly written programs (or programs used outside their intended purpose), and using only serialized data lets you survive that (BUT NOT ALWAYS) and keep going. However, we're also missing out because we HAVE to constantly write scripts that take that serialized data, unserialize it to work on it using REGEX, and then re-serialize it for the next program. At BEST we're running either each-line-is-field, or CSV where each row is a UNIT of fields separated by commas. But what happens when a program spits out data every TWO lines? Or worse, sometimes two, and if there's more data available, THREE or more? What happens when we're dumping multi-line text with special characters? (My living nightmare happens.)
I JUST had to write an awk program to parse nmap's output for host discovery. It dumps the host name, it dumps the type of "I'm here!" acknowledgement found (ARP vs TCP, etc), and IF it finds a MAC address it dumps that too. All of that crap is in between lines of description text and, thanks to the MAC, the number of lines per "object" is variable! They actually say DON'T parse their text output in their man page. You should actually be using... and here it is... their XML output option.
OH WAIT. How the hell would you get BASH, a language without methods and member variables (AFAIK) to parse XML?
So not only do you have this huge productivity hit from the impedance mismatch. But any data you don't intentionally capture is lost. And since shell programs can output differing data you might not even SEE some of the edge cases when you're building your data capture script.
To me--and I'd actually be willing to put work into doing this--the ideal Bash world would have the main syntax unchanged for compatibility-reasons, but an OO extension that directly supported XML, JSON, and classes with built-in serializer/deserializers as constructors or something. (So you can write your own, but they're clearly marked as entry and exit points in the class structure.)
I see no reason why most UNIX commands shouldn't have a -JSON flag. That format is not going away and it's way more legible and space-efficient than XML. Although, MOST programs don't even have an XML format. Also, I think all programs should have a SCHEMA section in the man page, but Powershell is so amazing you can just store the output into a $object and then just start poking around and reading the object's fields to learn the output.
To me, PowerShell is much more "new user" friendly. You don't need a manpage for a lot of things the way Bash scripting does because the OO nature lets you glean a lot more information from a proper, rigid, known architecture. "Oh. Those are the methods, those are the variables and they're all clearly named." as opposed to "Oh god, what is this gibberish text screen supposed to mean?" Every one of us has seen a 300-column CSV where you constantly have to look back at the header line to see what field it is until you eventually give up and load a GUI CSV parser. (But when Excel/OpenOffice crash on very large CSVs, then you're stuck with strange, crappy, shareware CSV readers. That happened, it sucked.)
Lastly, one thing that surprises me is how many people consider C++ a good language, and OO proper modern methodology, but would never consider using it in Bash/command-line scripting. Now, I'm not saying it should ever be REQUIRED, but as an option, I can't think of any good reason to oppose it.
Tangentially related, but when I started using FreeBSD as my primary operating system, I only had passing experience with shells (mostly Msys on Windows, funnily enough).
So I found zsh to be greatly superior to bash without any prior biases....
It also helps zsh has incredible auto-magic-complete support, unlike bash, which helps a lot when working with certain programs (such as tar).
zsh is pretty magical. I haven't been able to get into it yet except an hour or two. I can't decide whether I like it or not though.
ALSO, I forgot to mention: How do you deserialize a program's output when it's written so poorly that it dumps error messages to STDOUT instead of STDERR? (The horror... the horror...)
To me--and I'd actually be willing to put work into doing this--the ideal Bash world would have the main syntax unchanged for compatibility-reasons, but an OO extension that directly supported XML, JSON, and classes with built-in serializer/deserializers as constructors or something. (So you can write your own, but they're clearly marked as entry and exit points in the class structure.)
That made me think of Haskell. I found it to be very nice for scripting tasks harder than Bash one-liners. And it supports JSON serialization via the "aeson" library. (However, the syntax is quite different from Bash. But after some time, I've come to appreciate that. ^^)
Yesterday, I wrote a Haskell script that reads an HTML table with years in some lines of the first column, and groups all lines in the second column that belong to a specific year. Then, it converts every year group into a <h2> and every table line into an enumeration element. Oh, and it also reverses the order of years. And it reformats the resulting HTML code to be nicely indented.
I JUST had to write an awk program to parse nmap's output for host discovery. It dumps the host name, it dumps the type of "I'm here!" acknowledgement found (ARP vs TCP, etc), and IF it finds a MAC address it dumps that too. All of that crap is inbetween lines of description text and thanks to the MAC, the number of lines per "object" is variable! They actually say DON'T parse their text output in their man page. You should actually be using... and here it is... their XML output option.
OH WAIT. How the hell would you get BASH, a language without methods and member variables (AFAIK) to parse XML?
Nmap::Parser! Tada! Perl has you covered (untested). Seriously, considering the nature of the work that you're constantly doing you should give Perl a close look. There's tons of modules for solving these kinds of problems. The motto in Perl is to not shell out (as you might in bash), but rather to prefer CPAN modules which already do the shelling out for you and have wrapped the command in a safety layer. Not only is there in this particular case a module that already knows the schema for nmap, but there are also modules for parsing XML or JSON with varying degrees of complexity or simplicity as required for cases where no module exists. For simple cases you could do a one-liner, and for more complex cases write an actual program. Perl isn't the only such platform that will have modules like this, but it does have a long history of being a sysadmin's best friend so there will likely already be many modules for that space. And where there isn't you can create your own!
To me--and I'd actually be willing to put work into doing this--the ideal Bash world would have the main syntax unchanged for compatibility-reasons, but an OO extension that directly supported XML, JSON, and classes with built-in serializer/deserializers as constructors or something. (So you can write your own, but they're clearly marked as entry and exit points in the class structure.)
I think you're asking too much of bash. Its primary purpose is as a shell language. Invoking commands and wiring them up to the user, file system, and other commands. It's not meant to do everything the best. There are existing tools that will always do that better than bash ever could. And really, by the very nature of a command shell, you can already "extend" the bash shell by just writing new commands that do what you want. For example, write a command that allowed you to simply parse and process XML data.
I see no reason why most UNIX commands shouldn't have a -JSON command. That format is not going away and it's way more legible and space-efficient than XML.
And maybe one that lets you translate an XML structure into JSON, and a separate one that allows you to easily extract data from JSON. Google first, these probably exist.
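(jq covers the extraction side nicely, for what it's worth; assuming it's installed, and with a made-up JSON-emitting command and field names:)
some-json-emitting-command | jq -r '.hosts[] | select(.state == "up") | .addr'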
So I found zsh to be greatly superior to bash without any prior biases.
This is the consensus. Even most people in #bash agree that zsh and fish are better designed. Nobody in his right mind, however, would argue that either is more ubiquitous--and that is where bash truly shines.
Well it appears to work. Here's a simple test script that just dumps the raw blessed objects of online "hosts". To keep things simple it separates nmap command line options from hosts (e.g., IP addresses) using the standard -- option.
bambams@sephiroth:~$ perl nmap.pl -- castopulence.org
$VAR1 = bless( {
    'addrs' => { 'ipv4' => '64.85.162.126' },
    'distance' => undef,
    'hostnames' => [ 'castopulence.org', 'b03s17le.corenetworks.net' ],
    'hostscript' => undef,
    'ipidsequence' => undef,
    'os' => undef,
    'ports' => {
        'extraports' => { 'count' => '994', 'state' => 'closed' },
        'tcp' => {
            '113' => { 'service' => { 'confidence' => '3', 'extrainfo' => undef, 'fingerprint' => undef, 'method' => 'table', 'name' => 'ident', 'port' => '113', 'product' => undef, 'proto' => 'unknown', 'rpcnum' => undef, 'script' => undef, 'tunnel' => undef, 'version' => undef }, 'state' => 'open' },
            '22' => { 'service' => { 'confidence' => '3', 'extrainfo' => undef, 'fingerprint' => undef, 'method' => 'table', 'name' => 'ssh', 'port' => '22', 'product' => undef, 'proto' => 'unknown', 'rpcnum' => undef, 'script' => undef, 'tunnel' => undef, 'version' => undef }, 'state' => 'open' },
            '443' => { 'service' => { 'confidence' => '3', 'extrainfo' => undef, 'fingerprint' => undef, 'method' => 'table', 'name' => 'https', 'port' => '443', 'product' => undef, 'proto' => 'unknown', 'rpcnum' => undef, 'script' => undef, 'tunnel' => undef, 'version' => undef }, 'state' => 'open' },
            '554' => { 'service' => { 'confidence' => '3', 'extrainfo' => undef, 'fingerprint' => undef, 'method' => 'table', 'name' => 'rtsp', 'port' => '554', 'product' => undef, 'proto' => 'unknown', 'rpcnum' => undef, 'script' => undef, 'tunnel' => undef, 'version' => undef }, 'state' => 'filtered' },
            '80' => { 'service' => { 'confidence' => '3', 'extrainfo' => undef, 'fingerprint' => undef, 'method' => 'table', 'name' => 'http', 'port' => '80', 'product' => undef, 'proto' => 'unknown', 'rpcnum' => undef, 'script' => undef, 'tunnel' => undef, 'version' => undef }, 'state' => 'open' },
            '9000' => { 'service' => { 'confidence' => '3', 'extrainfo' => undef, 'fingerprint' => undef, 'method' => 'table', 'name' => 'cslistener', 'port' => '9000', 'product' => undef, 'proto' => 'unknown', 'rpcnum' => undef, 'script' => undef, 'tunnel' => undef, 'version' => undef }, 'state' => 'open' }
        },
        'tcp_port_count' => 6,
        'udp_port_count' => 0
    },
    'status' => 'up',
    'tcpsequence' => undef,
    'tcptssequence' => undef,
    'trace' => { 'hops' => [] },
    'trace_error' => undef,
    'uptime' => undef
}, 'Nmap::Parser::Host' );
Requires List::MoreUtils and Nmap::Parser modules. It also requires Perl 5.22.xx or better, but that can be lifted by removing the appropriate use line. It will fall back on whatever the used modules require. Nevertheless, to install the modules, my advice is to install perlbrew:
curl -L http://install.perlbrew.pl | bash
Follow the directions to source the environment (and add it to bashrc for later). Then install the latest stable perl (this takes a few minutes):
perlbrew install --as stable stable
When it's finished, assuming it went gracefully, install cpanm, switch to the new local perl environment, and install the dependencies.
perlbrew install-cpanm
perlbrew use stable
cpanm List::MoreUtils Nmap::Parser
You can use perlbrew switch instead of perlbrew use if your account doesn't depend on running administrative tasks that rely on a particular environment. To be safe, I opted for use so you wouldn't post at 2 AM screaming that servers are down and I broke your system. Note: use only modifies your current environment so logging out and in would "restore" it. With either option, you can switch/use system to go back to the original PATH perl. If your environment is more complex than this then YMMV. I recommend familiarizing yourself with this stuff on a desktop rather than a production server for safety's sake...
Is perl worth learning? I've heard much about perl being a "write only" language with a very cryptic syntax. I'm sure you don't HAVE to write it cryptically but it's more like "all code you'll encounter is written as such." On the other hand, their regex language seems very nice and I end up using it with grep/etc all the time.
The main thing that leads to Perl appearing "cryptic" is its use of sigils (grammatical symbols on variables/expressions) to express context. Unlike most other programming languages, Perl has a concept of plural expressed in the language itself which can change the meaning of an expression; the creator is a linguist. That's combined with a few other interesting syntax variations which require learning Perl to understand (i.e., unlike something like Python, you probably wouldn't be able to read it without learning the language first). For example, the following sets of two statements are equivalent:
my @results1 = map { $_ + 2 } grep { /^[0-9]+$/ } @input;
my @results2 = map $_ + 2, grep /^[0-9]+$/, @input;
The first "parameter" to map and grep and family are code blocks, and they're magical in the sense that you can either pass a block (in braces) or an expression (note: no comma versus comma). You can think of both essentially turning into lambdas (anonymous functions) implicitly. So that one special expression is not evaluated and then the result passed into map or grep or friends, but instead is passed in as a chunk of code to be repeatedly called against the members of the list passed in. In Perl, a regular expression is implicitly applied to the default variable, $_. The grep code is equivalent to saying, "the current member matches this regular expression", i.e., is a positive integer. It can also be weird to a newcomer how the use of parenthesis is optional in a function call, as above. You could make the parenthesis explicit if you thought it reads better (but you'd be wrong):
my @results1 = map({ $_ + 2 } grep({ /^[0-9]+$/ } @input));
my @results2 = map($_ + 2, grep(/^[0-9]+$/, @input));
Note, still no commas between "arguments" in the block case. The code could also be kept in a subroutine reference (which is like a lambda) and called from the block. For example:
my $pos_int = sub { /^[0-9]+$/ };
my $plus_2 = sub { $_ + 2 };
my @results = map { $plus_2->() } grep { $pos_int->() } @input;
Whereas Python's motto is that there should only be one way to do it, as you can see, in Perl there's many ways to do it[1]. This obviously can also lead to cryptic code since the styles can vary wildly and the reader obviously needs to understand all of this stuff for the code to make sense. That said, it gives a lot of expressive power to the author.
Perl is a very complex language with lots of exceptional syntax. It can sometimes be difficult to be sure that what you've tried to say is what you said. This is compounded by the fact that Perl was written as a replacement for things like awk and as such its default running mode is very relaxed. If variables don't exist they are implicitly ignored/created. If you don't quote a bareword where Perl thinks you meant a string it will magically become a string. This allows for very short, compact programs in oneliners on the command line, but also makes for difficult to debug programs. You generally need to enable strict and warnings pragmas to enforce "strict" syntax (so none of those things I just mentioned work) and to output warnings when you do something that is considered dangerous or obscure (which will print out a warning message and line number at run-time to warn you that you should change the code to be less reliant on such features).
If you're willing to learn Awk then I'd argue that you should learn Perl as well. Perl is like a super-awk/grep/sed/etc. all in one. There have been many additional features over time that improve the language a lot. Perl 6 has also just been released, which is a completely different language to Perl 5 (what people normally mean when they say Perl). Perl 6 is radically different, and has some really nice features, but it's too young to really know where it stands in practice. It will be years before Perl 5 begins to fade away so it's probably best to learn Perl 5 first and then if you feel inspired to then move on to Perl 6.
Not sure if it is "useful", but it is the one that I use most often.
#!/bin/bash
sudo aptitude update
sudo aptitude upgrade
sudo aptitude autoclean
sudo apt-get autoremove
I know, the "Updater" should do that...
I hacked up this script when I first started using Debian because I was coming from Fedora where a single command would update packages and I was afraid that I'd forget the "update" part. Additionally, I found myself getting into trouble back in the day with plain "upgrade" versus "safe-upgrade". I typically set this up as a Cmnd_Alias[1] with limited arguments in sudoers with NOPASSWD for my user account so that I can easily sudo dist-upgrade on any (so far) distro to keep the system up-to-date.
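Something along these lines in sudoers, via visudo (illustrative only; the user name and script path are placeholders, and the trailing "" limits the command to no arguments):
Cmnd_Alias DIST_UPGRADE = /usr/local/bin/dist-upgrade ""
myuser ALL = (root) NOPASSWD: DIST_UPGRADE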
Caveat: I'm not 100% sure that aptitude or even yum[2] cannot fork an interactive shell session or arbitrary command given these commands. I gather that aptitude has a robust interactive mode, and even normal operations can prompt the user for confirmation. I'm not sure if the user is able to do something to spawn a process from that, which would allow somebody to easily compromise any of my systems by compromising my personal account... And now I've revealed this weakness to the Internetz.
The need arose because I usually run distros without the full desktop experience of Gnome/KDE/etc so I didn't have a GUI program prompting me regularly to update packages. That is, until I discovered MATE, a fork of the old Gnome 2 project. I've been trying that out for the last few weeks and am relieved that people took the time to salvage this system.
Regarding your second footnote, I usually use Ubuntu myself. I tried Fedora because I read that's the distro Linus Torvalds prefers, and ended up hating it. Debian-based distros are much easier to work with.
What really struck me is that the UI, at least in Fedora 23, seems to have been designed for tablets (Fedora's gedit is a great showcase of this). People complained about Win8, this is even worse.
As a rule you have to know what you want from a desktop in Linux because there are many flavors and often many are supported on each distro (and you can always roll your own if you aren't satisfied with that and have the skill). I don't prefer any of the default desktop environments these days because distros usually prefer Gnome 3, which I hate, or KDE which I've never had good experiences with and seems like an extra resource hog. I am currently running Ubuntu Mate at home.
I hate Ubuntu on principle, but it's a compromise between being able to play games decently without having to return back to Windows... I hope it's temporary. I avoided the default Ubuntu release because I don't like Unity, plus I gather many or most of the spying facilities in Ubuntu are built into Unity (I haven't really done my homework yet to see if any remain in Ubuntu Mate).
Prior to Ubuntu I've been running Debian for several years, both on the server and desktop. I rather like Debian, but due to the community split of ffmpeg it has left the stable version of Debian without many codecs and multimedia software... Which isn't entirely bad, but I stupidly have been keeping my music in a proprietary format instead of converting to a free one. But also most media acquired online requires proprietary codecs so you either don't watch or you need them... Another feature I lost in Debian jessie was the ability to stream movies to my PS3. It would be nice to get that back.
I like Debian. It more or less respects my freedoms and choices, is stable, and does most things you could ask for. The only reason I have shied away from it on my desktop at home is because I was getting poor performance in Counter-Strike: Source which had me at a disadvantage against other players and I was hoping that Ubuntu would resolve that. I'm not sure if it's a placebo effect or not yet, but it does seem to have helped. At least, the one night I played so far in Ubuntu.
I managed to play through Firewatch the last couple of days too, but I had to turn the graphics quality all to "low" to get a semi-reliable 15 fps... Barely enough to play through it, but it sufficed. I gather it's a Unity3D game so I think the Unity project just doesn't spend enough money optimizing the Linux side of things... I also get a terrible fps in Interstellar Marines (like 2 fps).
I'm like you, bamccaig. But now I'm using Xubuntu, and it makes my computer fly...
I have run LXDE and Xfce for several months each and am more or less satisfied. LXDE is a little bit too bare-bones. Presumably you can include pieces of Xfce to gain a few features (one in particular that I like and depend on is manually rearranging the task bar of running applications). That setup is more or less satisfactory, but it still is a bit bare.
I don't think that Steam tries overly hard to cooperate with an Xfce setup, and vice versa. That's not to say that you can't get it working, but I had more trouble than I wanted. Otherwise, I was satisfied.
MATE is essentially Gnome 2, which I think most Linux users enjoyed. It's a bit heavier weight than Xfce for sure, but then my desktop isn't all that limited for resources either so I can probably afford to burn a few on a nicer user experience.
I'd honestly probably prefer a tiled window manager. I have used raw Xmonad for long periods of time on my EeePC and in Windows-hosted VMs and more or less love it. I should probably try dwm too. In any case, I'm currently begrudgingly settling for MATE because I want to give things like "Steam" a chance to Just Work(tm) for a while. I've actually run Steam on a raw Xorg server launched from within xmonad. I don't know enough about what is missing with that setup, but it led to a very poor user experience. IIRC, Counter-Strike would completely fuck up Xorg's screen resolution, would occasionally hang, etc. I think I had various assortments of other issues too. This was back in Debian wheezy, the previous release, with Steam forced with a patched installer containing Ubuntu binaries... No telling which things were the culprit, but I digress. I only have so much energy to fight with gaming platforms before I just want to start playing the damn games. Little bit at a time...
Alas, it seems as things get better they keep getting worse. I remember years ago (like college, so probably 9 or 10 years ago) installing Compiz on Fedora with Gnome 2 and getting a hardware accelerated 3D cube full of desktops. That was some flashy shit, and pretty damn cool. It also supported fancy (albeit, useless) effects such as raindrops on the screen. Pretty cool stuff to see. I imagine that still exists somewhere in the Linux sphere, but I'm sad to see that nothing practical seems to have spawned from it [yet]...
I use Xubuntu. But on my Chromebook I installed a Linux distro built specifically for my model. It came with Unity. At first I hated it.
Now I FREAKING LOVE IT. I hate that you can't customize it. I hate that it basically uses almost every combination of meta keys so there's nothing left for the user. HOWEVER, the tab and virtual space keys are absolutely amazing. It's like Windows with windows_key + left/right/up/down on crack. I can take any window, maximize it, move it to the left half or right half the screen, or move it across virtual desktops with only combinations of meta keys and the arrow keys. Shift. Shift control. Alt control. Alt shift control. All of those get used and either move, resize, or move virtual desktop.
When you have a very limited screen space, like a netbook, it's wonderful. It's the best, closest thing you can get to a tiling window manager without actually going for one.
I have four virtual desktops, all horizontal. Unity can also have VERTICAL virtual desktops! But I find that with small arrow keys, and a small screen, you either accidentally move windows up/down, or you get lost and forget where your window is across 8 freaking virtual desktops. And of course, the thing has a Celeron processor (still better than an Atom for that year) so there's no way I can multitask enough to fill more than four.
[out of order second edit]
One more Linux program I always install is Guake. It's a Quake-style drop-down console. It supports tabs and transparency. But the biggest thing is I just hit F11 and BOOM, it shows up over whatever I'm doing. I hit F12 and it's full-screen. F11 again and it's gone. For certain, very specific tasks, it's a life-saver when you have to periodically check a terminal, or refer to two split windows AND a terminal, etc. It also doesn't change when you move virtual desktops so it's one, static, drop-down terminal (plus terminal tabs) that you can keep while moving around virtual spaces.
When I'm working with Guake and Unity virtual desktops and people see me moving without leaving the keyboard, they think I'm some sort of magician. Click click click click, "here's all the computers on the network that aren't correctly joined to the domain."
[edit] Speaking of OS's. I was ALMOST ready to upgrade to Windows 10 last night. I know there are TONS of kernel improvements past 7. Whereas pre-8 kernel supported a suspend state for processes, 8 and higher actually uses that state and supports it much better. Suspending a process STOPS ALL CPU usage and can swap all memory to virtual. You can leave something up, freeze it, and not have to worry that VLC's developers are too stupid to devise a pause feature that doesn't absorb full CPU cycles. I know Metro-apps are automatically suspended when not in use. I'm not sure if others are.
HOWEVER, even yesterday, they're still coming out with the horrible amounts of tracking Windows 10 does. It really looks like they subsidized the cost of Windows 10 by whoring out their users' data. Now instead of being able to track someone when they visit a site, why not track every single click? Apparently a computer completely idle will open connections to over 50 IP addresses and make over 300 connections in an 8-hour period... with ZERO user interaction. That's despicable. It's also a huge freaking HIPAA violation. You can't run an OS that sends people's clicks over the internet in a medical office. That's illegal as hell.
They're also sneaking those tracking features into Windows 7 and 8. But you can remove/disable those KBs. You cannot in 10. Even with every opt-out box checked, it still phones home.
Also, one last thing. Apparently, the only version where you're even allowed to opt out is the highest, Enterprise edition. (But it still dials home.) So yes, they really did subsidize (whore) their OS out to advertisers to lower the price.
Before this happened, I was really rooting for Microsoft. I hate Apple's walled garden. Microsoft was making open-source changes. I was hoping they'd continue to be a more open, honest company. Now it turns out they've become the AdSense of operating systems.
Okay, I can't be the only one who sees the irony in a post complaining about tracking linking a Forbes article, which I can't read unless I disable Adblock. Not going to happen. I'll turn off Adblock for sites I like and trust, but generally not for big commercial entities and definitely not when they try to coerce me into doing so.
That said, I'm running Windows 10 on my machines and love it, it's much more responsive than even Win8 (which is itself blazing even compared to 7). Then again I don't much care for wearing a tinfoil hat (I find them to be crippling), so there's that. I respect those that do, of course, so don't take that as me being condescending.
edit: Anyway, getting away from the politics of it all (which is about as likely to generate light as the religious thread, and I'd hate to see this thread descend to that level), I can understand why everyone hates GNOME 3, having seen it firsthand in Fedora, but what I don't understand is the collective hard-on for GNOME 2. Back when that was the norm for most Linux distros, it was the biggest thing that kept me from using Linux as an everyday OS. To me it felt like a bastard child of Windows 98 and pre-OSX Mac, taking the worst of both. Given the choice between GNOME 2 and KDE 3, I'd pick the latter every time.
Granted though, KDE 4 is even worse than any version of GNOME will ever be. I like to pretend it doesn't exist.
Okay, I can't be the only one who sees the irony in a post complaining about tracking linking a Forbes article, which I can't read unless I disable Adblock. Not going to happen.
I've seen that page with both Adblock Pro and uBlock origin. Maybe they just hate you specifically.
Anyhow, the direct source the article quotes:
https://voat.co/v/technology/comments/835741
[edit] Damn it, now they deleted their post?
One of the F'd up things is tracking that hits akamaitechnologies.com. An operating system bundling with an ad content delivery network is ASKING FOR TROUBLE. There's already a known virus for that specific network:
http://blog.mitechmate.com/remove-deploy-static-akamaitechnologies-com-virus/