Skipping boost and stl libraries while debugging

This trick will allow you to step over boost and stl functions when debugging. It will not allow you to step into any of that stuff. Yippee.
The downside is that if you want to follow where a boost::signal ends up, you won’t be able to unless you already know where it’s going.

  1. Create a file called: boost_stl.natstepfilter
  2. Dump contents below into it.
  3. Put that file into your “C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Packages\Debugger\Visualizers” directory.
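The file contents the post refers to didn't survive here. As a sketch of what a .natstepfilter file looks like (a small XML file of regex filters the Visual Studio debugger consults), a minimal version covering both namespaces might be something like this; the regexes are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<StepFilter xmlns="http://schemas.microsoft.com/vstudio/debugger/natstepfilter/2010">
  <!-- Never step into std:: or boost:: functions. -->
  <Function>
    <Name>std::.*</Name>
    <Action>NoStepInto</Action>
  </Function>
  <Function>
    <Name>boost::.*</Name>
    <Action>NoStepInto</Action>
  </Function>
</StepFilter>
```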


Hyperdimensional Beings

Do you believe in angels or demons, or ghosts that are destined to wander the earth for who knows how long? I love looking for connections between what we know scientifically (or what we’ve theorized) and what religions have taught and passed down. For example, I am Catholic, so I believe in angels, demons, souls in purgatory, God, Jesus, Holy Spirit, saints, hell, among other things.

I’m not too good at math stuff or the physics stuff, but I like to theorize in my spare time because I think it’s fun and to me it makes sense. For example, one of my theories is about angels and demons.

I believe angels and demons exist, but we can’t see them. Why? It’s obvious from a physics perspective. Maybe this isn’t physically accurate, but it’s just a thought. For example, let’s assume that we live in a 2D world, similar to a painting. We can only look forward and backward, and that’s all we can see. We can only see things on our flat piece of artwork. It’s impossible to look perpendicular, outside of our picture, because that’s a higher dimension. Even if we could see it, we wouldn’t be able to comprehend it. It might be like looking into the future or the past.

I think that maybe demons, angels, and people in purgatory all live in a higher dimension, where they can see us (as on a painting), but we can’t see them. They can interact with us if they want as well. If we raise the concept to a 3D level, they live in the fourth or fifth dimension, but we can’t see them. They, however, can see us quite easily, like looking at a painting.

But sometimes angels and demons interact with us, which is quite possible. We have demons that possess people and angels that appear to people. This is about the same as flattening yourself and injecting yourself into a 2D painting. Not something they probably want to do too often. Sometimes people report shadows that move, or footprints in carpet, but they see nothing. This should make sense, because I can take my finger and scratch off a piece of the painting, but light only interacts in our 3D space with materials in 3D, not in the fourth or fifth dimension. Or does it? Sometimes people see things, but it’s faint. Maybe a stupid orb or a weird floaty thing. Their cameras do weird things too.

Electromagnetic interference is frequently reported when “ghost hunters” go into haunted houses and the lights do strange things, and the devices go crazy. Lights flicker and sometimes you hear sounds. We know that light is just another electromagnetic frequency, so it might bounce off of something if they are only a little bit in our dimension. But to us, it looks like weird floaty things.

For example, take me putting my finger on the piece of artwork. The majority of my finger is not on or in the painting; just a tiny bit touches the surface. But what the 2D person sees is this weird line suspended in midair. It appears and disappears when I want it to. Light even bounces off of it. It can even cast a shadow or push things over, since I’m directly touching the stuff in the painting (if the stuff in the painting actually contained movable objects).

I think sometimes we assume that electromagnetism only happens in our 3D world. If we don’t know there are other dimensions, and there are things in those dimensions we can’t see (so we assume nothing is there), we will be unable to describe weird electromagnetic happenings. It will be impossible. For example, today physicists talk about “dark energy” and/or “dark matter” that they throw into their simulations to get things to simulate properly. Come on, we know what that crap is. It’s stuff in a higher dimension that warps spacetime. It has mass, it has energy; we just can’t see it, yet we know it’s there because our simulations fall apart without these “fudge factors”.

Just because we can’t see heaven or hell, or demons, angels, or God, that doesn’t mean they don’t exist. Prove those things exist? They’re in another dimension perpendicular to ours; it’s not possible for us to do so. If you’re in an elevator rapidly accelerating upward, can you prove the elevator is accelerating up, rather than that you’re being pulled downward? It’s not possible for you to know. Similarly, asking someone to prove that God, Satan, demons, or angels exist or don’t exist is a stupid request. Physically, really, it is. We need a different reference frame. It’s the same as asking me to prove there are more than four dimensions. How does someone who lives in a 2D plane prove there’s a third dimension to someone else who also lives in a 2D plane?

Singletons with controlled destruction

I’m not going to debate about why one should or should not use singletons. I’m only going to provide an improvement for the singleton. One of the problems with singletons is that they use a static member. The problem with static members is that they can be destroyed in any order on application exit. If you have more than one singleton, and have dependencies between them, you want to control destruction of the singleton on application exit.

class StuffManager
{
public:
    static StuffManager* instance();
    static void destroy();

private:
    StuffManager() {}
    ~StuffManager() {}

    static StuffManager* sm_;
};

#include "StuffManager.h"

StuffManager* StuffManager::sm_ = nullptr;

void StuffManager::destroy()
{
    delete sm_;
    sm_ = nullptr;
}

StuffManager* StuffManager::instance()
{
    if (sm_ == nullptr)
        sm_ = new StuffManager();
    return sm_;
}

The differences between this and the Super Simple Singleton I posted about a couple of years ago are:

  1. The instance is a member of the class, not a function.
  2. There is a destroy() function

The destroy() function allows for controlled destruction of the singleton, which is a million times better. When your application exits, and you have more than one singleton and they are dependent upon each other, who knows which one will get destroyed first. This gives you a lot more control over it.

Todd’s Practical GTD Agile Method

I love GTD, but it definitely has some issues that I don’t like. Here are some good links to learn GTD. I also recommend reading his book, or not. It’s long and boring actually. What I ended up doing is merging a few things between GTD and Agile. I’m definitely not any sort of Scrum Master or Agile guru by any means, but I do find some things useful about it when it comes to project management.

I use these to implement my GTD / Agile methodology, but my method is a little different and I will explain why. Come back here after you’ve read at least the first link above.

I organize by doing the following:

I use the tag system heavily. The folder system is too limited: if I want to categorize one item into two boxes, I should be able to do so. Tags work exactly like folders, but they’re substantially more flexible. The folders just aren’t necessary.


C++ Best Practices

I love best practices. It’s like a jump start or a synopsis of all the things you should do without reading through all the heavy books.

In and out params

For functions, it’s handy to be able to identify what’s an “in” parameter, versus what’s an “out” parameter. I just got done reading an article about the Doom3 engine source code and there was something that I liked.

Data function(const int val, const Data1& d1, Data2** d2, Data3* d3);
  1. From this signature you can identify what is an “in” and what is an “out”.
  2. The return type is obviously an “out”.
  3. The “const int val” identifies this as an “in”, even though it’s just an int.
  4. const Data& d1 is an “in”.
  5. For d2, it’s an out, and this is obvious because it takes a double pointer.
  6. For d3, this is also obviously an out. You want the function to fill something out.
  7. All out parameters use pointers, all in parameters use const or const refs.
  8. Never use a parameter as both an “in” and an “out”.

Now look at what it looks like when you call this function.

Data d = function(counter, data, &ptr, &data);

You can tell what is an input and what is an output just by looking at how it’s called. If it has an &, you know it’s an out (except for the return value, obviously). You also have a guarantee that counter and data won’t be modified inside the function. The function could copy the value and change the copy as a separate variable on the stack, but that requires extra work. This is all about simple guarantees. I really like this approach, and I think I will adopt it for my standards.

Tools I use

Some of the tools I use to manage my tasks (life) are the following:


The website is a task management system based around the GTD technique. It’s great because you don’t have to use a particular technique if you don’t want to. For example, I use the “tags” feature to assign tags to all of my tasks for organization. It also has a handy feature where you can create tasks by sending an email to a special email address. This is handy when used in combination with a “send email widget” you can install on your phone to quickly send an email. Whenever I send an email, it’s automatically “starred,” so during my weekly review I know what to turn into actionable tasks. You can assign due dates, priorities, contexts, etc. The only things I feel it doesn’t or can’t capture very well are files and research. You can upload files, but it requires a more expensive plan to store data. I just use Evernote for this, but Evernote also has limits. It has a calendar, which I use heavily, and I can sort things any way I want. I looked around at a lot of different task management systems (including stuff like Jira), but I have found nothing better. It can create goals and store outlines and notes, among other things, but I find the notes are better captured in the tasks themselves or with Evernote.


Using a pomodoro app is useful for doing 30-minute or 50-minute sprints. You can use an ordinary timer, but there are also apps for your phone, which are easier to use if you’re at a coffee shop or something. These are very helpful for timeboxing your tasks so you don’t go overboard. It has many benefits, such as forcing you to get as much done as you can in 30 minutes, so you concentrate more. It also prevents you from working too much on one task, so you don’t waste time and don’t get burnt out on that one task.


Vim – Jumping around the page

These are my favorite shortcuts for jumping around the page.

z+t = Scroll the current line to the top of the screen

z+b = Scroll the current line to the bottom of the screen

z+z = Scroll the current line to the center of the screen

H = Move the cursor to the top (high) of the screen.

L = Move the cursor to the bottom (low) of the screen.

M = Move the cursor to the center (middle) of the screen.

Task Management – 01

I’ve been learning a lot about how to manage my time (or tasks, rather) more effectively. How do you get more done in a day without causing more stress, and still have time for leisure? There are a few things that I personally do, and I’ll just be keeping a log here on this site of some of the helpful tips and things I’ve been doing.

  1. Timebox your tasks. Use the pomodoro technique.
    1. Do 50/10 for things like projects, if you can. But sometimes this can be difficult.
    2. Do 25/5 for smaller things like homework or habits that don’t end.
    3. Alternate between difficult and easy pomodoros. For example in order “write book”, “clean out email inbox”, “do homework”, “lift weights”. Hard easy hard easy.
    4. There are many pomodoro timers for your phone. Get one.
  2. Do the most important things in the morning. After you get home from work, you won’t want to do them because you’ll be worn out.
  3. Change your environment. Coffee shop, library, or some other place to keep you away from distractions.
  4. Listen to ambient sound or noise. I listen to thunderstorms or rain + trains. It drowns out the chatter and noise from Starbucks and lets me focus a lot easier.
  5. Mix in easy very short healthy habits between pomodoros such as “drink full glass of water”. Or you can set them up to start the pomodoro instead of using your break time.
  6. Use the S.M.A.R.T. system for goals. More on this in a later post.

Next time, I’ll discuss GTD and the tools I use.

Clang omp XCode

How to switch from clang to clang-omp. I am using Yosemite and Xcode 6.2.
Some of this was taken from Stack Overflow.

brew install clang-omp
cd /usr/bin
sudo mv clang clang-apple
sudo mv clang++ clang++-apple
sudo ln -s /usr/local/Cellar/clang-omp/2015-04-01/bin/clang-omp++ ./clang++
sudo ln -s /usr/local/Cellar/clang-omp/2015-04-01/bin/clang-omp ./clang
cd /Applications/
sudo mv clang clang-apple
sudo mv clang++ clang++-apple
sudo ln -s /usr/local/Cellar/clang-omp/2015-04-01/bin/clang-omp++ ./clang++
sudo ln -s /usr/local/Cellar/clang-omp/2015-04-01/bin/clang-omp ./clang
cd /Applications/
sudo mv -f * ../../

Create a project in Xcode, using the Hello World code on clang-openmp website for test.
Add “-fopenmp” to Custom Compiler Flags -> Other C Flags in project settings.
Add /usr/lib/libiomp5.dylib to the build phases of project (project settings -> Build Phases -> drag /usr/lib/libiomp5.dylib into Link Binary with Libraries)
Set Header search path: /usr/local/include/libiomp

Don’t forget to add #include <omp.h> in your source (the header search path above makes it visible).


boost::algorithm::ends_with

One of the biggest hassles with strings is doing what you want quickly with a one-liner. I recently stumbled across this little gem in boost: boost::algorithm::ends_with(). Basically, it’s a quick check to see if a string ends with another string. No messing around with indices and decrementing iterators, etc.

#include <boost/algorithm/string/predicate.hpp>
#include <cassert>
#include <string>

std::string text = "stuffing";
bool res = boost::algorithm::ends_with(text, "ing");
assert(res == true);

Mac dylib rpath framework bundle explanation

The purpose of this article is to teach you how to properly configure a dylib / framework on the Mac, along with the application linking against it. If you don’t do this properly, you could be in for a long, time-wasting ordeal. In this article we will describe, in enough detail, how the linking process works in order to get things working, and hopefully teach you enough to configure things on your own if you decide to do something in a non-standard way. We will also give you the tools and knowledge required to debug issues related to library loading, particularly with rpaths.

The subject
When you load an application, the dynamic linker (on the Mac it’s actually a program called dyld, I believe) will look at all of the references (symbols) in the application and try to resolve them (perhaps on load, perhaps lazily). In order to do this, it needs to know which libraries the application depends upon, where those libraries are on the hard drive, and also which libraries those libraries depend upon, so it can load them too, recursively. If the dynamic linker can’t find these libraries, it can’t load them, bottom line. You’ll end up with an error such as “bundle is corrupt, invalid, or missing resources,” or something like that.


operator= for a derived class

It didn’t occur to me until now that I have never actually written an operator= for a derived class. Or maybe I wasn’t doing it right, or maybe I completely forgot… Here is how you’re supposed to do it:

D& D::operator=(const D& other)
{
    B::operator=(other); // B is D's base class; without this call the base part is never copied
    // Copy other data here...
    return *this;
}

I don’t ever recall invoking operator=() manually this way. It looks odd, but it should make sense: if you don’t, the base class portion will never be copied, and that’s what you’re explicitly doing here.

boost::uuid for unique identifiers

Frequently, when I need a unique identifier, I might do something silly, like create a static counter and increment it every time an instance of that object is created. There are concerns of overflow and other weird things, such as saving and loading these identifiers and syncing the counter back up…

Instead, I would highly recommend using boost::uuid to create a unique identifier. It’s basically a GUID or a UUID. The best part is that it’s cross platform, so it works on mac/linux/windows. It goes something like this:

#include <boost/lexical_cast.hpp>
#include <boost/uuid/uuid.hpp>
#include <boost/uuid/uuid_generators.hpp>
#include <boost/uuid/uuid_io.hpp>

boost::uuids::uuid uuid = boost::uuids::random_generator()();
std::string s = boost::lexical_cast<std::string>(uuid);

The syntax is a bit weird for generating a uuid, but that’s how it’s done.

You can use boost::lexical_cast to convert between the uuid and a string. As of right now, boost::serialization cannot serialize boost::uuids::uuid objects, for some reason, so the best thing to do is to lexical_cast to a string, serialize that, then lexical_cast back into a boost::uuids::uuid.

Reducing API modifications

A colleague of mine gave me a really cool idea that’s so simple I never thought about it from a design perspective. Sometimes, a function might take several arguments, where each argument represents a configuration value.

void Func(int important_data,
          int important_data2,
          const std::string& setting1,
          float setting2,
          double setting3)
{
    // Do stuff here...
}
If, in the future, you need to add more settings you can add more parameters… But that causes the API to change, which is a huge hassle most of the time. Instead, it’s better to create a struct and pass that in. That way, the API is kept the same. You probably want to initialize the struct to some good defaults though.

struct SettingsPack
{
    std::string setting1;
    float setting2 = 0.0f;
    double setting3 = 0.0;
    // Could add any number of settings here in the future.
};

void Func(int important_data,
          int important_data2,
          const SettingsPack& settings)
{
    // Do stuff here...
}

So now, it doesn’t matter how many settings you might add in the future; the API doesn’t change. In fact, if you look at the Win32 API, you’ll see they do the same thing. The function takes a struct, and there’s a bunch of stuff you need to set in the struct. This allows the API to stay usable well after it should have been killed, without the older apps crashing. 🙂

Comments – 01

I think APIs should be mostly self-documenting, so I prefer to not add any comments unless it somehow helps. Obviously, this doesn’t help if the user wants doxygen documentation. If you do add comments, they should be in the .cpp file only. The reason for this is that the .h file then reads nice, neat, and compact; otherwise, your header files are massively huge and difficult to digest. But if you do need comments, make sure they’re doxygen style.

So in the cpp file, prefer to have something like this:

// ----------------------------------------------------
void add(int val1, int val2)


There’s no point in adding comments to this function because it should be obvious. But I prefer to have the divider for readability. If you need comments, add them doxygen style, like this:

/** ----------------------------------------------------
 * @brief Describes the function.
 * @param name [in] The name of the object.
 */
void getObject(const std::string& name)


This of course causes problems. If all of your function comments are in the .cpp, you need the .cpp to read the documentation. This is okay for me, but not okay if someone else uses the library precompiled. But then again, you’re probably going to have to compile the sources yourself anyway, since there’s no guarantee the ABI won’t change between versions of the toolchain.

This is a catch-22: you must run doxygen to see the documentation, or ship the source code. I really don’t care, because I don’t intend on publishing any of my code… right now, anyway. But I’m not sure if it would make my life easier to just put them all in the .h file.

Vim Reference Sheet 2

hjkl – move cursor around
^ & $ – move cursor to beginning / end of line
gg & G – move to top / bottom of file
<num>G – move to line number
w, e & b – move forward to the start / end of the next word, and backward a word at a time
]] & [[ – moves to next / previous function or section
% – Goes to opposite brace, parentheses, etc.
Ctrl + u & Ctrl + d – page up and page down
gd – moves cursor to local declaration and highlights variable
gD – moves cursor to global declaration and highlights variable

s & S – deletes character / line at cursor position and enters insert mode
d<motion> & dd – deletes text covered by the motion / deletes the line at the cursor position
D – deletes from cursor until end of line
y<motion> & yy/Y – yanks text covered by the motion / yanks the line at the cursor position
x & X – deletes character at / before cursor position
o & O – inserts new line below / above cursor & enters insert mode
p & P – pastes yanked text after / before cursor
~ – changes case of character
Shift + J – Concatenate current line and next line.

/<string> – searches for a string & highlights all results
n & N – jumps forward / backwards through highlighted search results
f & F<char> – searches for first occurrence of <char> forward / backwards
* – highlights word at cursor position
g* & g# – like * and #, but also matches partial words (can find “rain” in “rainbow”); g* searches forward, g# backwards

i & I – enters insert mode before cursor position & enters insert and places cursor at start of line
a & A – enters insert mode after cursor position & enters insert and places cursor at end of line
r & R – replace mode for single character or until user presses <esc>
v & V<movement> – enters character / line highlight mode
. – repeats last command
m<char> – marks area with <char>
`<char> – recalls marked area
u & U – undo & undo for entire line
Ctrl + r – redo
File & Buffer Related
:n <filename> – opens <filename>
:wa – writes all buffers out
:w – writes current buffer out
:q – quits vim
:q! – quits vim without saving
ZZ – saves current buffer & quits vim
gf – opens file under cursor

Miscellaneous Tips
Comment lines with “//”: Ctrl + V, <num>j, Shift + i, “//”, <Esc>
Delete comment markers (//): Ctrl + V, <num>j, l, d

Go through these
Arrow keys Move cursor

hjkl Same as arrow keys

itextESC Insert text

cwnewESC Change word to new

easESC pluralize word (end of word; append s; escape from input state)

x delete a character

dw delete a word

dd delete a line

3dd deletes 3 lines

u undo previous change

ZZ exit vi , saving changes

:q!CR quit, discarding changes

/textCR search for text

^U ^D scroll up or down

:cmdCR any ex or ed command

ESC end insert or incomplete command

DEL (delete or rubout) interrupts

:wCR write back changes

:w!CR forced write, if permission originally not valid

:qCR quit

:q!CR quit, discard changes

:e nameCR edit file name

:e!CR reedit, discard changes

:e + nameCR edit, starting at end

:e +nCR edit, starting at line n

:e #CR edit alternate file

:e! #CR edit alternate file, discard changes

:w nameCR write file name

:w! nameCR overwrite file name

:shCR run shell, then return

:!cmdCR run cmd, then return

:nCR edit next file in arglist

:n argsCR specify new arglist

^G show current file and line

:ta tagCR position cursor to tag

^F forward screen

^B backward screen

^D scroll down half screen

^U scroll up half screen

nG go to the beginning of the specified line (end default), where n is a line number

/pat next line matching pat

?pat previous line matching pat

n repeat last / or ? command

N reverse last / or ? command

/pat/+n nth line after pat

?pat?-n nth line before pat

]] next section/function

[[ previous section/function

( beginning of sentence

) end of sentence

{ beginning of paragraph

} end of paragraph

% find matching ( ) or { }

^L clear and redraw window

^R clear and redraw window if ^L is -> key

zCR redraw screen with current line at top of window

z-CR redraw screen with current line at bottom of window

z.CR redraw screen with current line at center of window

/pat/z-CR move pat line to bottom of window

zn.CR use n-line window

^E scroll window down one line

^Y scroll window up one line

`` move cursor to previous context

'' move cursor to previous context, at first non-white space in line

mx mark current position with the ASCII lower-case letter x

`x move cursor to mark x

'x move cursor to first non-white space in line marked by x

H top line on screen

L last line on screen

M middle line on screen

+ next line, at first non-white space character

– previous line, at first non-white space character

CR return, same as +

down-arrow or j next line, same column

up-arrow or k previous line, same column

^ first non-white space character

0 beginning of line

$ end of line

l or -> forward

h or <- backward

^H same as <- (backspace)

space same as -> (space bar)

fx find next x

Fx find previous x

tx move cursor to just before the next x

Tx move cursor to just after the previous x

; repeat last f, F, t, or T

, repeat inverse of last f, F, t, or T

n| move to column n

% find matching ( ) or { }

w forward a word

b back a word

e end of word

) to next sentence

} to next paragraph

( back a sentence

{ back a paragraph

W forward a blank-delimited word

B back a blank-delimited word

E end of a blank-delimited word

^H erase last character (backspace)

^W erase last word

erase your erase character, same as ^H (backspace)

kill your kill character, erase this line of input

\ quotes your erase and kill characters

ESC ends insertion, back to command mode

CTRL-C interrupt, suspends insert mode

^D backtab one character; reset left margin of autoindent

^^D caret (^) followed by control-d (^D); backtab to beginning of line; do not reset left margin of autoindent

0^D backtab to beginning of line; reset left margin of autoindent

^V quote non-printable character

a append after cursor

A append at end of line

i insert before cursor

I insert before first non-blank

o open line below

O open line above

rx replace single character with x

RtextESC replace characters

d delete

c change

y yank lines to buffer

> right shift

< left shift

! filter through command

C change rest of line (c$)

D delete rest of line (d$)

s substitute characters (cl)

S substitute lines (cc)

J join lines

x delete characters (dl)

X delete characters before cursor dh)

Y yank lines (yy)

3yy yank 3 lines

3yl yank 3 characters

p put back text after cursor

P put back text before cursor

"xp put from buffer x

"xy yank to buffer x

"xd delete into buffer x

u undo last change

U restore current line

. repeat last change

"dp retrieve d'th last delete

Linux Quick Reference

General Sys Admin
Misc Commands
uname -r – Displays kernel version
useful for changing between bases – calculator: bc; obase=# sets the output base, ibase=# sets the input base
editing the runlevel /etc/inittab
changing the runlevel on the fly: init #
Get version of ubuntu: lsb_release -a

Hardware Information
network settings: /sbin/ifconfig -a
displays cpu info: cat /proc/cpuinfo
displays devices: cat /proc/devices
list hardware: /sbin/lspci -vv
displays system status: procinfo -a
free memory: free -m
checks hard drive specs: hdparm -I /dev/device
lspci -nn – Shows hardware connected to the pci bus
lsusb – Shows USB connected hardware
lshw -C usb – Additional info on USB related hardware (good for USB dongles)
cat /proc/cpuinfo
cat /proc/meminfo
cat /proc/zoneinfo
cat /proc/mounts

File / Drive / Directory Commands
pwd – print working directory
check file or directory size: du -sh filename
get partition information: df -alh
mount a drive: mount -t dos floppy /dev/fd0
find something: apropos or whatis, which, locate (updatedb), whereis, find / -print | xargs grep
zip up a folder: tar czf filename.tar.gz folder
unzip a folder: tar -zxvf filename.tar.gz
Download a file: scp user@host:remote_file local_path
Upload a file: scp local_file user@host:remote_path

Process Management
ps -aux | grep program_name
list processes: ps -eflea
list processes interactively: top or htop
identify processes using files or sockets: fuser filename
run a process in the background after logging off: nohup command & OR command &
atq : views the pending jobs
watch PROGRAM_NAME – keeps running the same program over and over
crontab -l : Displays crontab jobs
crontab -e : Edits the crontab job file
at : schedules a one time task
batch : runs a task when the system load average is low

top : interactive process management
t: Displays summary information off and on.
m : Displays memory information off and on.
A: Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system.
f: Enters an interactive configuration screen for top. Helpful for setting up top for a specific task.
o: Enables you to interactively select the ordering within top.
r: Issues renice command.
k: Issues kill command.
z: Turn on or off color/mono

User Management
who – Who is connected to the machine
last – last users who’ve logged in
rwho -a

Service Management
service <service-name> status | start | stop | restart
sudo /etc/init.d/service-name (start|stop|restart)
/usr/sbin/ntsysv : allows you to modify system services
/usr/sbin/chkconfig : allows you to modify system services
/sbin/chkconfig --list : lists system services and their state
service --status-all

Package Management
apt-get install : Installs package
apt-get remove : Removes package
apt-get --purge remove : Removes package & configuration files
apt-get update : Updates the package listings from the mirrors on the servers
apt-get upgrade : Displays list of upgrades for package
apt-get dist-upgrade : Similar to apt-get upgrade, except will install or remove packages to satisfy dependencies
apt-cache search “TextToSearch” : Searches description and package names for a keyword
Ex: apt-cache search “Intrusion Detection”
Ex2: apt-cache search sniffer
apt-cache depends : Lists package dependencies
apt-cache showpkg : Shows more details about the package
apt-cache show : Same as dpkg -s

dpkg -l : Lists all installed packages
dpkg -l <package> : Lists an individual package
dpkg -l '*pattern*' : Lists packages whose names match pattern
dpkg -L <package> : Lists files owned by the installed package
dpkg --contents sudo_1.6.7p5-2_i386.deb : Lists files owned by a package that is not installed
dpkg -S /bin/netstat : Finds what package owns the /bin/netstat file
dpkg -s <package> | grep Status : Checks if package is installed or not
dpkg -s <package> : Lists lots of info about the installed package

Networking Commands
ifconfig – lists IP address (similar to ipconfig in Windows)
/etc/rc.d/init.d/network start – start the network service
sudo ifconfig <interface> up/down – Brings up/down the specified interface
/etc/init.d/network restart – restarts the interfaces
activating your NIC: /sbin/ifup eth0 or ifconfig eth0 up
deactivating your NIC: /sbin/ifdown eth0 or ifconfig eth0 down

sudo dhclient <interface> – Request IP address from DHCP server for the specified interface
sudo dhclient -r <interface> – Release IP address associated with the specified interface
sudo iptables -L – Lists firewall rules
/etc/iftab (Feisty and pre-releases (Edgy, etc)) – /etc/udev/rules.d/70-persistent-net.rules (Gutsy) – File which assigns logical names (eth0, wlan0, etc) to MAC addresses
cat /etc/resolv.conf – Lists DNS servers associated with network connections (Network Manager)
/etc/dhcp3/dhclient.conf – File which sets or modifies dns (domain name servers) settings

Lists open ports:
lsof -Pnl +M -i4 : Lists open ports
lsof -Pnl +M -i6 : Lists open ports
-P : This option inhibits the conversion of port numbers to port names for network files. Inhibiting the conversion may make lsof run a little faster. It is also useful when port name lookup is not working properly.
-n : This option inhibits the conversion of network numbers to host names for network files. Inhibiting conversion may make lsof run faster. It is also useful when host name lookup is not working properly.
-l : This option inhibits the conversion of user ID numbers to login names. It is also useful when login name lookup is working improperly or slowly.
+M : Enables the reporting of portmapper registrations for local TCP and UDP ports.
-i4 : IPv4 listing only
-i6 : IPv6 listing only

cat /proc/net/tcp | perl -lane '(undef,$p)=split ":",$F[1]; print hex($p)."\t".getpwuid($F[7]) if $p' | sort -n | uniq -c
netstat -tulpn
netstat -npl
-t : TCP port
-u : UDP port
-l : Show only listening sockets.
-p : Show the PID and name of the program to which each socket / port belongs
-n : No DNS lookup (speed up operation)
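The /proc/net/tcp one-liner above works because the kernel stores the local address:port field in hex, which Perl's hex() decodes. The same decoding can be done in plain shell; a self-contained sketch (the sample field value is made up for illustration):

```shell
# /proc/net/tcp stores the local port as 4 hex digits after the colon,
# e.g. "0100007F:1F90" is 127.0.0.1 port 8080.
local_field="0100007F:1F90"        # sample field from /proc/net/tcp
port_hex=${local_field##*:}        # strip through the colon -> 1F90
port=$(printf '%d' "0x$port_hex")  # decode hex -> decimal
echo "$port"                       # 8080
```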

Shows service listening on port 8080:
cat /etc/services | grep 8080
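A plain grep against /etc/services matches the port number anywhere on the line; anchoring on the port/protocol column is more precise. A sketch using awk on sample services-format data, kept self-contained with a here-doc (the entries are illustrative):

```shell
# Print the service name registered for a given TCP port,
# reading /etc/services-format data from stdin.
lookup_port() {
  awk -v p="$1/tcp" '$2 == p { print $1 }'
}

svc=$(lookup_port 8080 <<'EOF'
http            80/tcp
http-alt        8080/tcp
http-alt        8080/udp
EOF
)
echo "$svc"   # http-alt
```

In practice you would feed it the real file: `lookup_port 8080 < /etc/services`.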

route -n OR netstat -rn : Shows current gateway
Check routing cache: /sbin/route -Cn
route : shows/modifies current routing table
nslookup or dig – shows info about the server’s ip
iwlist scan – shows wireless networks that are available in the area along with basic encryption information
lshw -C network – Shows interface and driver associated with each networking device
sudo route add default gw &lt;ip&gt; – Example of how to set the default gateway to &lt;ip&gt;
sudo route del default gw &lt;ip&gt; – Example of how to delete the default gateway setting
mtr – my traceroute
change finger information: chfn
talk username [terminal-name]
write username [terminal-name] – must be using the same computer
arp -e : shows other systems’ MAC addresses
cat /proc/net/arp : shows current arp table
iptables, ipchains(old)
socklist : lists open sockets, type, port, process id and name; use with fuser or kill
host : same as nslookup but will use both hosts file as well as DNS
nslookup : returns an ip address given a hostname
netstat – displays connections, routing tables, stats, etc.
netstat -punta : list externally connected processes
netstat -nap : list all connected processes
netstat -s : show network statistics
netstat -a -i eth0 : kernel interface table info
iptraf – program for monitoring lan traffic
tcpdump – allows you to analyze certain packets based on a criteria
nmap -sP &lt;network&gt; : scans the network for pingable ip addresses
Wireshark – network protocol analyzer

Environment Variables
To make environment variables set themselves at every login
place them in .bash_profile in the user’s home directory

– To add to an already existing variable:
export PATH=$PATH:/new/directory

– To create a new one:
export MYVAR=value
check environment variables: set, env

– To reload the .bash_profile type:
source ~/.bash_profile
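Put together, a minimal ~/.bash_profile might look like the following sketch (the paths and variable names are illustrative):

```shell
# ~/.bash_profile -- read by login shells; reload with: source ~/.bash_profile
export JAVA_HOME=/usr/lib/jvm/java-6-sun    # illustrative path
export PATH=$PATH:$JAVA_HOME/bin            # extend an existing variable
export EDITOR=vim                           # create a new one
```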

Common process kill signals:
SIGHUP – the modem connection has been broken
SIGQUIT – the process should stop and produce a coredump file as a debugging aid
SIGINT – the user has struck the interrupt key (^C)
SIGKILL – signal 9
SIGTERM – the default termination signal sent by kill
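The difference between these signals shows up in a process's exit status, which is 128 plus the signal number when the process dies from an uncaught signal. A small sketch, assuming a POSIX shell:

```shell
# SIGTERM (signal 15) is the polite, catchable request; SIGKILL (9) cannot
# be caught or ignored. Demonstrate SIGTERM on a throwaway process:
sleep 60 &
pid=$!
kill -TERM "$pid"   # same as plain `kill $pid`
wait "$pid"
status=$?           # 128 + 15 = 143 for an uncaught SIGTERM
echo "$status"
```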

Octal Permissions
read: 4
write: 2
execute: 1
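Each permission digit is the sum of these values (e.g. 6 = read+write), with one digit each for owner, group, and other. A quick sanity check, assuming GNU coreutils stat:

```shell
# 640 = owner read+write (4+2), group read (4), other nothing (0)
tmp=$(mktemp)
chmod 640 "$tmp"
perms=$(stat -c '%a' "$tmp")   # GNU stat prints the octal mode
echo "$perms"                  # 640
rm -f "$tmp"
```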

Sections of the Manual
man pages sections: man # command
1) Commands
2) System Calls
3) Library Functions
4) Special Files
5) File Formats
6) Games
7) Miscellaneous Information
8) Maintenance Commands

SVN Client Commands
checkout: svn co svn://hostname/myproject myproject
update: svn update
svnserve -d --foreground -r /home/svn

Module and kernel manipulation
1) list loaded modules: /sbin/lsmod
2) Loads a module and its dependencies (high-level handling of loadable
modules; determines if the module is compatible with the kernel): modprobe module
3) Remove a loaded module: /sbin/rmmod modulename
4) Inserts a module into the active kernel: /sbin/insmod modulename
5) Creates dependencies file for a module (used by modprobe): depmod
6) Display information about a kernel module: modinfo
cat /etc/modprobe.d/blacklist – List modules that will not be loaded by the Operating System at boot time
lsmod – lists currently loaded kernel modules. (Example usage – lsmod | grep ndiswrapper)
sudo modprobe **** – Loads the kernel module ****. (Example usage – sudo modprobe ndiswrapper, sudo modprobe r818x, sudo modprobe ath_pci)
sudo modprobe -r **** – Unloads the kernel module ****. (Example usage – sudo modprobe -r ndiswrapper)
dmesg | more – Lists the boot log – good for troubleshooting problems with modules/drivers not being loaded

XWindow Commands
xorgcfg – Graphical configuration tool for XFree86 4.0
X -configure – generate an XF86Config file
glxgears – displays an openGL demo app
xvidtune – allows you to adjust monitor and graphics properties
how to restart xfree86 server and client:
shut down X11: sudo killall gdm or init 3
start with: init 5
Important XWindow Files:
Specifies which window manager to use and what
applications to start: $HOME/.xinitrc
Same as above: $HOME/.xsession
X Window Display Manager, automatically starts the xserver: xdm
$HOME/.Xdefaults: resources are stored here

logging into an ssh server: ssh username@ipaddress
starting a vncserver: vncserver
killing a vncserver: vncserver -kill :# (where # is the display number created when you started the server)
getting regular desktop: modify the $HOME/.vnc/xstartup file
Download a file: scp &lt;user&gt;@&lt;host&gt;:&lt;remote_file&gt; &lt;local_destination&gt;

Printer Management
http://localhost:631 – used to manage cups – use root as the login
Current Printer Description from CUPS management webpage:
hppsc1510 (Default Printer)
Description: HP PSC 1510 All-In-One
Location: HP PSC 1510 All-In-One
Make and Model: HP PSC 1510 Foomatic/hpijs (recommended)
Printer State: idle, accepting jobs, published.
Device URI: hp:/usb/PSC_1500_series?serial=MY62ND30C90498

Open ports 50000 and 50002 in the firewall. Check /var/log/messages to verify.
Getting network printing to work from windows
Make sure the “hplip” service is running to take advantage of all the features
1) Install HP Printer on windows machine using normal USB cable
2) Install HP Printer on linux machine using normal USB cable using the hplip driver and CUPS
3) Set up samba and CUPS together.
4) Browse to My Network Places and make sure the printer is in there.
Double click it (connect to it) and install the HP PSC 950 ( or something ) driver.
5) After the driver is installed, go into “Printers and Faxes” and make sure it is
in there.
6) Right click on it and go to properties->advanced and select the correct driver (HP PSC 1510 driver)
7) Click on “Color Management” and add all the HP_PSC_1600…. files. Also add the “sRGB Color Space Profile”

Other common ops:
Send a file to the printer: lpr or lp
Shows print jobs in queue: lpq or lpstat
Cancel print job: lprm or cancel
Installing sun Java on ubuntu 11.04
sudo add-apt-repository ppa:ferramroberto/java OR sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
sudo apt-get update
sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-jdk
sudo update-java-alternatives -s java-6-sun
ls /usr/lib/jvm -> To find out which one you need (just use the generic link)
sudo bash -c "echo JAVA_HOME=/usr/lib/jvm/java-6-sun/ >> /etc/environment"
Add $JAVA_HOME/bin to the PATH entry in the /etc/environment file.
java -version

Logging into mysql: mysql -u root -p
Restart mysql: sudo service mysql restart

mysql commands:
show tables;
show databases;

Set up so you don’t have to type in password

On the client machine that gets prompted for the password when you attempt to scp, do the following:

cd ~/.ssh
ssh-keygen -t rsa
scp /root/.ssh/id_rsa.pub &lt;user&gt;@&lt;server&gt;:/home/&lt;user&gt;/id_rsa.pub2 (rename it to pub2 so it doesn't overwrite the one on the server)
ssh &lt;user&gt;@&lt;server&gt;
cd /root/.ssh
cat /root/.ssh/id_rsa.pub2 >> authorized_keys
Delete the id_rsa.pub2 file when you are done.
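The key-generation step above can be sketched non-interactively as follows (the scratch directory and empty -N "" passphrase are illustrative; modern OpenSSH also ships ssh-copy-id, which replaces the manual scp/cat steps):

```shell
# Generate an RSA keypair without prompts into a scratch directory.
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q
ls "$keydir"    # lists id_rsa and id_rsa.pub
# The .pub file is what gets appended to ~/.ssh/authorized_keys on the
# server; `ssh-copy-id user@host` does that copy-and-append for you.
# rm -rf "$keydir"   # clean up when done
```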

Ubuntu Firewall
ufw enable
ufw disable


export P4IGNORE=/Users/Todd/Desktop/codes/sources/.p4ignore
export PATH=/Developer/NVIDIA/CUDA-6.0/bin:$PATH
export PATH=/usr/local/bin:/Developer/NVIDIA/CUDA-6.0/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/texbin
alias cds="cd /Users/Todd/Desktop/codes/sources/project"
alias cdb="cd /Users/Todd/Desktop/codes/builds/project"
alias ls="ls -alhG"

# Setting PATH for Python 2.7
# The original version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/2.7/bin:${PATH}"
export PATH


:inoremap jk &lt;Esc&gt;
:nmap ,w :w&lt;CR&gt;
:nmap ,O O&lt;Esc&gt;
:nmap ,o o&lt;Esc&gt;

set tabstop=4
set shiftwidth=4
set expandtab
set number
set relativenumber

“nmap O
“nmap o

“map i // ——————————————————————————————–^