What is the Difference Between Client Bridge & Wireless Repeater Modes in DD-WRT?


DD-WRT router firmware distinguishes itself in many ways, but one of the most useful is the simple setup of wireless modes within its interface. Many consumers and network administrators turn to DD-WRT when seeking the best way to set up a Client Bridge. Once they go DD-WRT, they never seem to go back, thanks to the simplicity, the customization possibilities, and the ease of the setup process.

There are a few basic networking terms to become acquainted with before reading further.

AP (Access Point) – The standard wireless mode for most routers in DD-WRT.

DHCP (Dynamic Host Configuration Protocol) – Automates network-parameter assignment to network devices. Simply put, it is a process that allows a router to automatically assign connected devices local IP addresses.

NAT (Network Address Translation) – The process of modifying IP address information while in transit across a router.

WDS (Wireless Distribution System) – A system enabling the wireless interconnection of access points, which allows a wireless network to be expanded using multiple access points without the traditional requirement that they be wired to one another.

A breakdown of the bridging modes available in DD-WRT: how to set Client Bridge, Wireless Bridge, and Repeater mode in DD-WRT.

How Does the Client Wireless Bridge Differ from Repeater Mode?

To put it simply, a Client Bridge links computers while a Wireless Repeater connects routers.

These mode options can be found in later builds of DD-WRT under the Wireless –> Basic Settings tab. The default mode in DD-WRT is AP, which sets your router up as a standard access point for users.

A Client Bridge can connect disparate pieces of a company or home network that were previously unable to connect through a router. The intended use for a Repeater is to take a wireless signal from a network and give it extended range.

Placing a Repeater in an opportune location can significantly strengthen a computer’s connection and network signal from a primary gateway. A Repeater is useful in a home or office when you are trying to boost wireless connection strengths, wireless range, and overall network sensitivity.

Client Bridges are increasingly popular for securely connecting wired devices that lack wireless capability of their own. With Client Bridges, the WLAN and the LAN are on the same subnet. Consequently, NAT is no longer used, and services running on the original network (like DHCP) work seamlessly on the newly created bridged network.

Inside a client bridged network, computers can see one another inside a Windows Network. However, the router will no longer accept wireless clients or broadcast beacons as it would in Repeater mode, minimizing outside access to the network.

If you are looking to extend wireless access to more remote parts of a home or office then the Repeater is the way to go. If you are looking to create a more seamless integrated network of computers without concern for extended wireless signal, then a Client Bridge could be the solution.

What is the Difference between the alternate DD-WRT Repeater Modes?

  • Repeater
    A) DHCP & NAT enabled
    B) Clients on a different subnet from the primary router (see the sketch after this list).
    C) Computers connected to one router cannot see computers connected to other routers in Windows Network.
  • Repeater Bridge
    A) Wireless Repeater capabilities with DHCP & NAT disabled.
    B) Clients on the same subnet as primary router.
    C) All computers can see one another in Windows Network.
  • Universal Wireless Repeater
    Uses a program/script called AutoAP to keep wireless connection with the nearest/optimal host Access Point.
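
To make the subnet distinction concrete, here is a minimal sketch using Python’s ipaddress module. The addresses are hypothetical (a primary router on 192.168.1.0/24 and a Repeater handing out its own 192.168.2.0/24); the point is simply that a Repeater’s clients fail the same-subnet test while a Repeater Bridge’s clients pass it:

    from ipaddress import ip_address, ip_network

    # Hypothetical addressing; adjust to match your own network.
    primary_lan = ip_network("192.168.1.0/24")     # primary router's LAN
    repeater_client = ip_address("192.168.2.100")  # Repeater mode: separate DHCP/NAT subnet
    bridge_client = ip_address("192.168.1.150")    # Repeater Bridge mode: primary router's subnet

    for label, client in [("Repeater client", repeater_client),
                          ("Repeater Bridge client", bridge_client)]:
        print(f"{label} {client}: same subnet as primary LAN? {client in primary_lan}")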

Explanation of Alternative Wireless Modes in DD-WRT

Client Mode (AP Client)
Used to link two wired networks using two wireless routers without creating a bridge. Computers on one wired network cannot see computers on the other wired network in Windows Network. Client mode allows the router to connect to other access points as a client.

Client Mode in DD-WRT turns the WLAN portion of your router into the WAN. In this mode, the router will no longer function as an access point (doesn’t allow clients), therefore, you will need wires to use the router and to configure it. The router won’t even be visible to your own wireless configuration software or Wi-Fi packet sniffer software like Wireshark, Kismet or Netstumbler.

In Client Mode, the WLAN and the LAN will not be bridged, creating different subnets on the same router. To host services such as FTP servers, port forwarding from the WLAN to the LAN will be necessary. Most users opt for Client Bridge mode instead of Client mode.

Ad-hoc Mode
Ad-hoc mode allows the router to connect to other wireless devices that are also available for ad hoc connections. Ad hoc networks lack the typical central management of an infrastructure-type network. Ad hoc mode uses STP (Spanning Tree Protocol), not WDS. Think of this mode as a Client Mode that doesn’t connect to infrastructure networks but rather to similarly configured ad hoc devices.

Networking FAQ



Networking FAQ 1: Breaking into the Field


  • What kind of networking jobs are there?
  • What are the different networking specialties?
    • Routing & Switching
    • Security
    • Wireless
    • Voice and Video
    • Data Center
    • Service Provider
  • How do I get into networking?
  • Do I need a college degree to be a networker?
  • Do I need certifications?
  • Do I need to know a programming language?
  • What should I list on my resume?
  • How much do networkers make?
  • How do I find a job?
  • Do you have any interview tips?
  • What are the negative aspects of networking?

What kind of networking jobs are there?

Computer networking is a vast field, rivaling general IT in the diversity of roles it offers. While all areas of networking share a common fundamental level of knowledge, most networkers find that they gravitate toward one or two specialties through a combination of interest and necessity.

Starting at the entry level, there are IT help desk and network operations center (NOC) positions. Depending on the size and breadth of an organization, these two departments might be combined. Most people entering the field out of high school or college will spend their first few years in one of these roles building real-world experience.

At the next level is network administration or operations, followed by network engineering. What’s the difference? Administration generally refers to the maintenance and operation of an existing network: Configuring switch ports, tweaking the office wireless, upgrading firmware, and so on. Tasks that, while certainly necessary and respectable, don’t require a deep knowledge of the technology involved. In many smaller companies, network administration is considered an extension of systems administration.

The engineering side is where the fun and challenging problems lie, and where you’ll need a healthy mix of knowledge and experience to thrive. Network engineering is often divided into junior and senior roles, though there’s no rule for doing so, and a “senior” engineer at one organization might be considered a junior engineer at another. The titles are only a loose approximation of the skill level or seniority required relative to other positions within the company.

The top tier is network architecture. At this level you seldom deal with the day-to-day operations of a network, and might never even configure devices except to intervene where a serious issue has arisen. The role of an architect is to develop the network to meet changing business needs across a long timeline. This can include evaluating new products and technologies, authoring high-level network designs, compiling budgets, and so forth. The job of architect is sometimes rolled into a network team manager position.

What we’ve covered so far are just the horizontal tiers of the networking career matrix. There are many vertical disciplines across most tiers as well, such as voice, wireless, security, data center, storage, service provider, or automation. Many networkers tend to gravitate to one or two of these specialties once they’ve mastered the fundamentals. Some vendors, like Cisco and Juniper, maintain certification tracks dedicated to certain specialties. Where you end up is a function of your interests and your employer’s needs.

What are the different networking specialties?

Networking as a field is considerably diverse. Through the late nineties and into the 2000s, networking has grown from an extension of systems administration into a field all its own, and over the past decade or so has come to include a number of subfields pertaining to specific technologies and their applications. While there’s no official definition for any of these, the professional community generally recognizes the following distinctions (many of which you’ll likely recognize from vendor certification tracks).

Routing & Switching

Routing and switching comprise the core competencies of networking. This is where most people start in the field, branching out to other areas of interest. R&S skills alone primarily apply to enterprise networks but serve as the foundation for all other concentrations.

Security

Most people equate security with firewalls and intrusion prevention systems, but to excel in this area requires a refined understanding of security policy and how it can be effectively enforced. This extends beyond hardware to include building VPNs, Denial of Service (DoS) prevention, mitigation of spam and phishing attempts, network access control (NAC), and myriad other technologies employed to protect a network and its users.

Wireless

Once a tangent of vanilla routing and switching, the ever-increasing demand for wireless networking has given rise to the formation of a dedicated subfield. To excel at wireless networking, you’ll need a thorough understanding of radio theory, controller and access point design, wireless security, client roaming, quality of service controls, and other concepts unique to wireless communication.

Voice and Video

Real-time communications probably constitute the largest subfield of networking. In fact, it’s perfectly plausible to dedicate your entire career to working only with voice over IP (VoIP). Voice experts need to be familiar not only with IP networking and VoIP, but to a large degree also with the general telecommunications industry, including legacy phone networks. Video teleconferencing, while perhaps not as widely implemented as voice, is a growing market with very similar design requirements.

Data Center

It might seem strange that “data center” is mentioned as a particular discipline within networking since, after all, most networks incorporate some form of data center. But the density afforded by data centers accommodates technologies not often found elsewhere, including storage area networks (SANs), virtualization clusters, and highly available (HA) systems. Data center networks also tend to operate at higher speeds and under much stricter downtime allowances than networks outside of purpose-built facilities.

Service Provider

Service provider networking tends to revolve around long-haul communications and Internet connectivity. Networkers who work for service providers focus more on the paths traffic takes from one network, or autonomous system (AS), to another.

How do I get into networking?

Your optimal strategy for pursuing a job in networking depends on your current position in your career. If you’re fresh out of high school or college, your primary concern should be to build real-world experience as quickly as possible. Having an internship under your belt can help put you ahead of other candidates, but you’ll still likely start off with an entry level job at an IT help desk or NOC. This isn’t a bad thing, but most people tend to pursue higher positions as soon as possible not just for the increased compensation, but to escape the unfortunate stress and tedium inherent to these roles.

If you’ve been working in a sister IT field like security or systems administration for a few years and beefing up your network skills, you might be able to land a job as a network administrator or junior network engineer. There might even be opportunity to cross-train into a networking position with your current employer (in which case the biggest challenge can sometimes be finding a suitable replacement for your current role).

If you’re stepping into networking from a different field entirely, the transition is more daunting. Networking – IT in general, really – moves fast and won’t wait for you to catch up. People coming over from unrelated backgrounds can feel overwhelmed by the breadth of new and seemingly ever-changing material to cover. But if you’re genuinely interested in a career in networking, stick with it. As with anything, experience (good and bad) breeds confidence.

Finally, a number of people (myself included) come into the field by way of military service, either having worked in IT during enlistment or commission, or having developed the skills through post-service vocational training.

Do I need a college degree to be a networker?

I hope not. I don’t have one and I’ve been doing this for years. I’ve even been extended offers for positions which clearly listed “bachelor’s degree” under mandatory requirements (these requirements are usually tacked on blindly by HR departments). While a college degree certainly isn’t going to hurt you, the more clued-in hiring managers greatly favor passion and experience over formal education. This is primarily because IT moves faster than college curricula.

For instance, let’s say you enrolled in a four-year degree program in the fall of 2011. At the time, no one had ever heard the term software-defined networking. Yet when you graduate in 2015, it seems that it’s the only thing people are talking about. While you were learning about networking, networking went and changed. Welcome to the world of IT.

[Image: Google Trends interest over time for “software-defined networking”]

And this is assuming you major in something pertinent to computer networking in the first place. Many universities don’t get more granular with their degree programs than computer science or information security, both of which are relevant, but neither of which will teach you how to build and operate a network.

Do I need certifications?

This is a topic of much debate and heated opinions throughout IT. The issue is divided primarily into two camps: Those who believe certifications act as proof of competence in a subject, and those who believe that real-world experience trumps formal assessment. As with most debates, actuality lies somewhere in between.

Few people will dispute that certifications offered by vendors (like Cisco and Juniper) and non-profit organizations (like CompTIA and ISC) offer an excellent path for professional development. While the topics they cover don’t always map to practical skills, it’s nice to have a roadmap showing where you’re headed and to acknowledge milestones by way of passing exams. They’re also a great avenue to evaluate your interest in particular specialties. Many people opt to pursue a single track to the expert level while picking up a couple associate-level certifications in other disciplines to break outside their comfort areas.

The downside to certifications is that they are often over-valued. After all, a multiple-choice test is not a great measure of someone’s skill as an analytical thinker. Some certification exams include lab simulations of real designs and problems, but these are necessarily constrained to very narrow parameters.

There’s also the widespread issue of cheating. Ultimately, a certification is just proof of having passed a test, and people cheat at tests. A number of disreputable companies record and sell copies of exam questions, called braindumps, which greatly dilutes the value of a certification. Some individuals even go as far as to pay someone else to take a test for them. In recent years, Cisco and other certification authorities have even begun requiring candidates to be photographed each time they take an exam.

Do I need to know a programming language?

Probably not, but this requirement is highly subject to change with the growing trend toward network automation. And knowing how to code is an invaluable skill even if not a strict requirement of your desired position. If you’re familiar with a particular programming language already, great! Be sure to keep in practice with it, so that even if you end up needing to learn a new language, you’ll already have a programmer’s mindset.

If you’re not already skilled in a programming language, I suggest learning Python. Python is very friendly to the novice programmer; easy to read and easy to write. It’s also very popular among network platforms, and can double as a scripting language to help expedite tedious tasks. Even if you don’t currently have an obvious need to write code, once you learn how, you’ll be amazed at how many opportunities you uncover to make your work more efficient.

For some examples of how easy it is to write useful Python, check out this blog post.

What should I list on my resume?

This topic is very subjective, and answers vary from country to country, but I’ll offer my advice for candidates seeking a job in the US.

To start with, there are the staples: Your name and contact information, current position (if any), education, and so forth. Don’t list your high school; it’s assumed that you have at least a diploma or GED if you’re applying for a job in IT. Include your college degree if you have one, but specify your major only if it’s relevant to the field. (Anything from computer science to business management can apply, but art history, for instance, suggests that your passion lies elsewhere.)

Include any relevant certifications (you can leave out that Red Cross CPR class) you hold, but only if they’re current. If you’re pressed for bullet points and want to list expired certifications, be sure to clearly annotate that they are no longer valid but that you would be willing to renew them as a condition of employment. (If you’re not willing, don’t list them. They’re gone.)

Work history will vary depending on where you’re coming from. If this is your first entry into the working world, list some projects you’ve worked on either in school or on your own time. If you’re crossing over from a field outside of IT, concentrate on skills and responsibilities that are transferable to the position you want. If you’re already working in a sister field, go ahead and list details about your work even if they’re not directly related to networking. A good recruiter or hiring manager will be able to form an idea of your competencies regardless.

Feel free to list out the skills in which you’re proficient, but don’t get carried away. Sometimes it’s beneficial to list out specific protocols and technologies to trigger keyword filters within application processing software, but before long your resume will begin to look like a bowl of alphabet soup. Use acronyms sparingly, and leave out what can be fairly assumed. For example, if you’ve listed experience working with server virtualization, the reader will infer that you know how IEEE 802.1Q works, so there’s no need to list it.

Though it may be tempting, don’t present topics with which you’ve only flirted as skills in which you’re proficient. If you explain that you’re not yet skilled at something but are actively trying to improve, it conveys ambition and potential. But if you inflate your abilities and your bluff gets called in an interview, you’re out for sure. (Remember, anything listed on your resume is fair game in an interview, so be sure to set the interviewer’s expectations appropriately.)

There are several items you want to be sure are not included on your resume. The first few should be obvious: Don’t list your date of birth (or any other explicit indicator of age), race, religion, sexual orientation, political affiliations, intention to overthrow the federal government, or medical conditions. Don’t list contact information for your personal references, but have the list ready in a separate file to be furnished upon request. Don’t include any hobbies or interests unless they’re at least partially relevant to the field. (For example, even if you’re applying for a NOC position, it’s reasonable to note an open source software project you’re involved with.) And while it’s perfectly fine and encouraged to note raises and promotions in your work history, don’t explicitly list your compensation.

Oh, and a note about grammar and spelling: This is perhaps the single most important document to your career. Proofread. Again and again. For every typo I find on a resume, I toss the whole thing in the trash. That may sound harsh, but attention to detail is a crucial skill in this field. If composition isn’t your strong suit, get someone else to proofread for you.

One more tip: Don’t save your resume with the file name “Resume.” That’s what everyone else calls their resume, too. Save it as your full name to make it easier for a recruiter juggling hundreds of resumes to pick out.

How much do networkers make?

You didn’t just scroll through this article until you got here, did you? I hope not.

This is the burning question everyone wants to know, for beyond all those books and exams must lie a field of riches just waiting to be harvested, right? The truth is that networking is like any other field: pay and benefits vary greatly from one geographic region to another, from one industry to another, and from one company to another for any given position. It’s a harsh truth, but you’re worth what the market says you’re worth.

There used to be this perception that achieving a given network certification would guarantee a minimum salary. While that may have been true at one time (and it probably wasn’t) it certainly isn’t today. IT is a much more mature field than it used to be: It’s not hard to find someone who knows how a router or switch works. No one is deploying a wireless network for the very first time. Your compensation in this field depends on how well you apply yourself, how aggressively you pursue new opportunities, and, unfortunately, no small amount of luck and timing.

Don’t be discouraged if you hear others boast about their lush salaries. Some people just get lucky. Some people also neglect to mention that they work sixty-hour weeks. And don’t forget to factor in cost of living: A networker making $80,000 in a major city is likely worse off than one making $60,000 in the suburbs. And there may be extenuating circumstances: I was making a six-digit salary in my early twenties. The catch? I was employed as a defense contractor in Iraq, not too far from people who would have liked very much to kill me.

“Okay, I get it… But how much can I make, really?” Alright, fine. If you want to get a better idea of what to expect with regard to compensation, Indeed.com is a good place to start. You can search for postings by keyword and geographic area, and get a rough idea what the going rate for a position might be. But please keep in mind that these are very generalized, error-prone estimations generated from an incomplete data pool (many posted positions don’t include salary information). The US Bureau of Labor Statistics also publishes national wage data sorted by occupation, if you’re keen to dive into that pile of raw data.

The time of year during which you apply and the current economic state can skew your chances as well:

[Image: Indeed.com salary trend for job postings matching “CCNP”]

The chart above shows average salary fluctuation for postings matching “CCNP” over a two-year period. Note how widely the going rate for a CCNP varies from month to month. Unfortunately, you need a job when you need a job.

Finally, don’t forget to account for non-monetary compensation offered, like health insurance, paid vacation, and employer-paid retirement fund contributions. These may not seem like much, especially for young adults, but a 5% employer 401K match or lower than average health insurance deductible can equate to a sizable salary bump when compared against a position with lesser benefits.

How do I find a job?

The first stop for most people is uploading their resume to the major job aggregation sites like Indeed, Monster, and Dice. While these sites are certainly useful, keep in mind that your special, unique resume ends up in a sea with thousands of other special, unique resumes. Also be prepared to ignore solicitations from people offering “franchise opportunities,” “sales positions” (pyramid schemes), or the chance to make thousands a week working from home! Perhaps the worst, though, is the occasional deadbeat recruiter who hasn’t even bothered to read your resume, and just wants to match a body to a position and collect his or her commission as quickly as possible.

Once you’ve published your resume, start applying to individual openings. And not just through job sites: Many companies, especially smaller ones, don’t even bother posting open positions to aggregators. Search for appealing companies local to your area and check the “careers” page on their corporate site. Easily 80% of these companies you will never hear from again, not even to say “thanks but no thanks,” but try not to get discouraged. Just keep applying.

Unfortunately, the old adage is usually true: It’s not what you know, it’s who you know. The best looking resume in the pile will have a hard time competing with a phone call to a friend already employed in a position of power within the company. Especially at smaller companies, it’s common for managers to ask their staff if they know anyone who would be a good fit for an open position before it’s ever posted for application by the public. This both expedites the hiring process and guarantees that the candidate is of reasonable character. (Few people will recommend hiring someone they don’t want to work with.)

So, now it’s time for that other kind of networking. People networking. Talk to your friends and mentors, see who’s hiring. Hit up Twitter and Facebook every so often and let people know you’re still looking. LinkedIn can be worth a shot as well, so long as you don’t start spamming invitations to people you don’t even know. Whatever channels you choose, remember to make a case for why people should want to hire you, and not just because you want a paycheck. Post a link to your resume if you’re comfortable with it, but always include at least a brief synopsis of where you’re coming from and where you want to go in your career.

Do you have any interview tips?

This advice is pretty standard and available everywhere, but I’m including it here because experience suggests that some people could use a refresher.

Do your research. Learn about the company you’re interviewing with. Find out how long they’ve been in business, where their offices are located, what their core business is, and the challenges they face. Make note of any new products introduced lately that might be a topic of conversation.

Brush up on your resume. Remember that skill you listed on your resume that you haven’t used in three years? Brush up on it before the interview in case it comes up. (Remember, anything listed on your resume is fair game during a technical evaluation.)

Prepare questions. Always have a list of questions ready to ask when the time comes. It’s perfectly acceptable to write these down ahead of time. Ask specific questions about the company and the position, even if only to confirm your assumptions. Skip any topics that were already covered earlier in the interview unless there’s a reason to go into more detail.

Be on time. This is the easiest thing you can do to give a good first impression. Allow yourself plenty of time to arrive at the interview on-time. If you’re going to be late due to circumstances beyond your control (e.g. a traffic accident, zombie infestation, etc.) notify the people you’re meeting with or your recruiter as soon as possible. Arriving late to an interview without prior notice is essentially telling the interviewers that you don’t value their time.

Be confident. A lot of people struggle with this one. Technical interviews can be very intimidating, especially when meeting with a large number of people. Don’t second-guess yourself. Be confident in your answers. But also don’t brag; no one likes a cocky candidate.

Be honest. Trailing on and on in search of the correct answer to a simple question is as painful for the interviewer as it is for you. If you don’t know the answer to a question, say so. Share what you do know offhand about the topic, and describe what your next steps would be to find the answer.

Write a follow-up note. After the interview is over, send a follow-up note to the people you spoke with a day or two later to thank them for their time. Offer more detail on any answers you struggled with during the interview to show that you’re capable of research. This isn’t strictly expected but it does help you stand out as a candidate and reaffirm your interest in the position.

Learn from the experience. Even a terrible interview is experience to be applied at the next one. Make a note of any areas where you think you can improve. Work on these in preparation for the next interview.

What are the negative aspects of networking?

Networking is an awesome job, but like any other field in IT it’s not without significant challenges and frustration. The biggest of these in my experience has been the tendency of colleagues to immediately blame the network for any problem. Internet access feeling sluggish? Must be the network. Web server returning 404s? Sounds like a network problem to me. It’s raining out and you left your car windows down? Damn network guys didn’t warn you.

The tendency of others to fault the network without evidence stems from an ignorance of how the network actually functions. And to a degree, this is understandable; we all have our respective areas of expertise. To many of our colleagues, the network is just this mysterious black box everything plugs into. Packets go in at one point, magic occurs, and they pop back out someplace else.

As such, it’s easy to fault the black box when things don’t go as planned. As a networker, be prepared to “prove it’s not the network” even when there’s no evidence to suggest that it is. Typically a packet capture is enough to satisfy the demand for due diligence on your part. (Of course, it’s always necessary to follow up on even seemingly frivolous accusations, because occasionally it really is the network.)

Another annoying aspect of networking, and IT in general, is that the network doesn’t sleep. It doesn’t have weekends off and it doesn’t go on vacation, and it has absolutely no regard for your time should you choose to do either. Things can break at any time, and a large enough disruption ensures that your personal schedule will be interrupted very soon thereafter. (Incidentally, not wanting to get woken up at 3 AM is great motivation to carefully design networks to be extremely resilient to failure.)

Most organizations of moderate size maintain an on-call rotation to handle after-hours issues. If placed on-call, you’ll be responsible for any after-hours issues that come up during the duration of your scheduled duty (typically one or two weeks). But the trade-off is that you (probably) won’t be bothered about emergency network issues when you’re not on-call.

While we’re on the topic of after-hours work, even planned changes can be a drag. While you at least know when they’re going to happen, service-impacting maintenance often needs to be scheduled very early in the morning or very late at night, when few customers are likely to be using the service. Downtime windows are generally unavoidable in most small- to medium-sized organizations lacking sufficient redundant infrastructure, but most managers are willing to trade compensatory time off from the regular work day in exchange for after-hours work.

Finally, many people simply find the pace of the field exhausting. Technology is always evolving, and you’ll be expected to keep up. But this is also an aspect of the field to be embraced: There’s always something new to learn, something to get better at. It helps to keep your skills fresh and your mind sharp.


Networking FAQ 2: Certifications


  • What are the most popular certifications in networking?
  • How much is certification X worth?
  • How should I study for certification exams?
    • Books
    • Training Videos
    • Instructor-Led Classes
    • Exam Simulators
    • Lab Practice
  • What’s a “brain dump?”
  • What is the exam experience like?
  • I only just barely passed! Does it still count?
  • My employer will pay for me to get a certification. Should I do it?

What are the most popular certifications in networking?

The most popular, or most common, certification track in networking is Cisco’s routing and switching series, which comprises the CCENT, CCNA, CCNP, and CCIE. Most networkers obtain the CCENT or CCNA in routing and switching as their first certification, and many progress upward or outward from there. Juniper maintains its own line of certifications roughly in parallel to Cisco’s. Although there isn’t quite as much demand for Juniper certifications at the entry level, the JNCIE is reasonably sought after. There’s also CompTIA, which offers the vendor-neutral Network+ certification, although historically there hasn’t been much meat to this cert.

Of course, a number of other companies also sponsor certification programs pursuant to their own market interests: These just seem to be the most popular. There are also a number of companies and non-profit organizations which offer certifications focused on security, wireless, virtualization, and other niches that are of value to many networkers.

How much is certification X worth?

A lot of newcomers to the field get the idea that a certification will guarantee them a certain position or salary. Unfortunately, this is not the case. Remember, in essence a certification is just a good reference: The certification sponsor is vouching for your abilities to the extent they were evaluated by whatever tests you passed. It has value to a potential employer only if he trusts the integrity of the certification and he is in need of someone with that specific skill set. For example, a Cisco CCNA certification isn’t very appealing to a company whose network is comprised mostly of Juniper routers and switches.

A lot of candidates are disappointed to learn that they aren’t entitled to the same salary as a friend with the same certification. The most commonly overlooked factor in compensation is location: A professional working in New York City might easily take home twice the income of someone doing the same job in a small town, an income disparity driven by regional differences in both demand and cost of living. Also keep in mind that experience generally trumps certification, as skill demonstrated through the execution of real world projects carries much more credibility than a good grade on a test.

While it’s impossible to put an exact dollar value on a certification (regardless of what training companies would have you believe), you can roughly gauge the value of one certification to that of another by checking how frequently each is listed as a requirement on job openings. This is hardly a scientific study, but it can lend a nudge in either direction when deciding which of two certifications would be more beneficial to pursue.

How should I study for certification exams?

There are plenty of study resources out there for just about every certification, and it can be difficult to tell which approach is ideal for you. There are a few core types of training material you’ll want to consider.

Books

Your first step in studying for a certification should be to purchase a book or two pertaining to the topics the certification covers. Just search for the certification name on Amazon and you’re likely to find several titles from various publishers. O’Reilly, Cisco Press, and Sybex (Wiley) are all very well established brands and usually a safe bet. You might also come across gems from lesser-known publishers. Always read customer reviews before purchasing a book, and make sure you’re buying the most recent edition applicable to the exam(s) you plan to take.

Books are recommended as the primary study material for a certification because they provide a thorough overview of the content on which you’ll be tested. They provide an economic method to judge your level of preparedness before spending money on more costly study tools like exam simulators and instructor-led classes. They also provide much more in-depth content for you to establish fundamental competency with the exam material.

Training Videos

Prerecorded training videos, such as those from CBT Nuggets, INE, and iPexpert, offer a nice compromise between books and live classes, both in content and in price. Some people prefer videos because they tend to focus better when they hear a person talking to them and have something to watch rather than having to trudge through pages of stagnant text. Video training generally doesn’t go into as much detail as books due to time and production constraints, but it can serve as a great refresher for topics you haven’t visited in a while. Video training usually costs more than books but substantially less than live classes.

Instructor-Led Classes

Live classes or “bootcamps” are the most costly preparation method (some even include a voucher for the certification exam), but they also have the unique advantage of a human instructor who can provide immediate answers to your questions. Most classes run for one or two weeks at a time and need to be scheduled well in advance. Some classes can cost several thousand dollars, so be sure to shop around and check out reviews from prior students before writing a check (or asking your boss to).

One downside to instructor-led classes is that they don’t always operate on your schedule. Some training providers will claim that they run classes regularly, but abruptly reschedule you if they don’t have enough students to justify the cost of hosting the class for a given week. Be sure to get a commitment in writing to the dates you’ve chosen if your schedule isn’t flexible.

If you have no prior experience with the topics to be covered, never start with a live class: You only have a fixed number of hours during which you can take advantage of the instructor, and these are far better spent solidifying existing knowledge and filling in gaps than learning fundamentals. At a minimum, read a book or two on the topics to be covered before scheduling a class so you can hit the ground running on the first day.

Exam Simulators

Exam simulators are applications which try to replicate the experience of taking an actual certification exam. They contain questions and exercises similar to what you’ll see on the real test, but also provide answers and explanations that help reinforce study. Most offer a timed exam mode, wherein you’ll need to answer a set number of questions correctly to pass. You’ll also have the option to concentrate on specific topics to improve areas of weakness. Some basic simulators come packaged with study books; others are available as standalone products. The standalone simulators can cost several hundred dollars (US), so you’ll need to decide whether they’re worth it.

Lab Practice

The most reliable way to evaluate your skills is to try them out on real network gear. Several training companies rent out hosted labs by the hour, or you may decide to build your own (we’ll cover building a home lab in a later article). Lab practice is highly recommended, but only after you’re reasonably familiar with the theory and configuration concerning the features and protocols you want to implement. This is especially important if you decide to rent lab time: You’ll want to avoid wasting time flipping through books while your lab session is in progress.

What’s a “brain dump?”

Some unscrupulous companies have taken to selling pirated copies of actual certification exam questions. They pay individuals to take an exam and write down all the questions they can remember immediately afterward, hence the term brain dump. More advanced schemes go so far as to employ video recording devices to capture the entire exam. The stolen material is reformatted and altered in a half-hearted attempt to obscure its origin, and marketed to prospective test takers as legitimate study material. Brain dumps are widely held responsible for the declining value of IT certifications because they allow individuals to pass exams with little or no comprehension of the material.

You can spot brain dump companies by watching for advertising focused on exam pass rates and low preparation times. They also like to boast about the number of “questions and answers” in their pools, rather than their depth or breadth of coverage. Providers of legitimate training materials will emphasize comprehension of the material and disciplined practice. As with any purchase, research your options and look for reviews from prior customers on neutral forums (not testimonials listed on the vendor’s own web site). Give the company a call and ensure that you can talk to someone knowledgeable about the certification you want to pursue.

For more background on brain dumps, check out this explanation by CertGuard.

What is the exam experience like?

First, are you confident that you’ve mastered all the material on which you’ll be tested? You shouldn’t feel like you’re rushing into the exam. Before scheduling the exam, you should be comfortable answering review questions even before checking the answer choices. If you’ve been using an exam simulator to study, you should be scoring at least 90% consistently (remember that the questions on the exam will differ from those which have become familiar by now). And don’t make the mistake so many people do of ignoring that one nagging topic you just can’t get a handle on: It will show up on the test. Repeatedly.

The majority of certification exams are proctored and can only be taken at a qualified testing center. Once you’re sure you’re ready for the exam, you’ll need to locate a testing center where the exam is administered and schedule a date and time to take it. Determine which testing provider offers the exam you want to take, and use its web site to find a testing center near you. For example, if you’re testing for the CCNA, you’ll need to find a Pearson VUE testing center which offers the appropriate exam. Most locations are not dedicated testing centers, but rather independent businesses which contract as exam proctors on the side. These are usually private IT training companies, community colleges, or consulting firms. Occasionally, the organization’s primary business might be entirely unrelated to IT certifications (a friend of mine once certified at a tax preparation office). Just be sure that the center is properly accredited by the testing provider before scheduling an appointment.

When selecting an exam time, be sure to allow yourself plenty of travel time, especially if it’s far away. If traveling during morning or evening rush hour, be sure to pad your anticipated commute time appropriately. Plan to arrive at least 15 minutes before your scheduled time, or longer if you’re unfamiliar with the area. If you arrive late, you may be refused admission and forced to forfeit the exam cost. (Most testing providers require you to reschedule or cancel your exam no less than 24 hours prior.) You’ll be asked to pay for the full cost of the exam online when you make your appointment.

When the day comes, be sure to grab something to eat before testing, especially if testing in the early morning or right after work. Avoid fast food or anything you don’t normally eat. It’s advisable to skip coffee and energy drinks as the caffeine will only amplify any feelings of anxiety. Remember to take with you a valid government-issued photo ID such as a driver’s license. You might also be asked to furnish a second piece of ID (credit or debit card, library card, college ID, concealed carry permit, passport, etc.) for extra verification.

Once you arrive at the testing center, you’ll be asked to present your ID and sign in. You may have to sign a confidentiality agreement affirming that you won’t share test material. You might also be asked to have your photograph, signature, and even fingerprints taken. (Some certification sponsors have mandated these extra security measures in recent years to combat the growing problem of exam fraud.) You will likely be asked to secure your personal effects (cell phone, wallet, keys, etc.) in a locker while you take the exam. This is to prevent people from accessing hidden notes or recording the exam material.

Stop and reassess your physical condition at this point. Grab a cup of water or hit the restroom now if you think you might need it: Once the exam begins, you won’t be permitted to leave the testing room. The proctor will brief you on the exam, ask if you have any questions, and then you’re on your own!

Exam experiences obviously vary from one test to another, but generally speaking they’re pretty boring. Just take your time to read and digest each question completely. On multiple-choice questions, see if you know the answer without looking at the provided options first. Don’t second-guess yourself and don’t read too much into the question; your gut is usually right. Don’t waste time on questions you don’t know: cut your losses, or (if the exam allows) go back and answer it later. If you experience any technical issues with the exam (the interface freezes, or a diagram gets cut off by the edge of the screen, for example) inform the proctor immediately.

Most exams will reveal your score and pass or fail status within a few seconds after completing the test. Try not to vomit during this interval. Hopefully you’ll be greeted with a happy “PASS” message! But if not, try not to get bummed out; you’ll live to fight another day. Either way, the results at this point are final, and you’re free to go. The proctor should assist in recovering your personal effects and provide a printout of your score before you leave.

A few certifications, such as Cisco’s CCIE, impose considerably more daunting practical lab exams. There have been plenty of stories published by people who have attended a CCIE lab. Here’s a great account by Bob McCouch.

I only just barely passed! Does it still count?

What do you call the guy who graduates bottom of his class in medical school? Doctor.

A lot of test candidates who just barely achieve the minimum passing score by one or two questions feel like they didn’t really earn the certification. Remember that certification exams are boolean in nature: You either passed or you didn’t. And if your score is equal to or greater than the minimum score, you passed. If the exam sponsor wanted the score to be three points higher, it’d be three points higher. Forget about the score and celebrate your accomplishment!

My employer will pay for me to get a certification. Should I do it?

Be careful with this. Many people see this as an opportunity for a free certification, but be sure to weigh the risks against its potential benefits. Many employers require you to front the cost of the exam and reimburse you afterward, but only if you pass. You might fail the exam and get stuck with the bill. This is an acceptable risk if you wanted to pursue the certification anyway, but not if you’re doing so only at the request of your employer.

Also check whether your employer is expecting a written commitment from you in return for sponsoring your study. This isn’t usually a concern for entry-level exams, but studying and testing for some higher-level certifications can cost thousands of dollars, and your employer may rightfully want to ensure you don’t jump ship right after they finish paying for your certification. If you’re not comfortable with this commitment, ask to negotiate a repayment arrangement should you decide to leave the company in the near future.


Networking FAQ 3: Names and Addresses


  • Where do IP addresses and domain names come from?
  • Did we really run out of IPv4 addresses?
  • Can I buy more IP addresses?
  • Does IPv6 really provide a bazillion addresses?
  • Why does IPv6 use hexadecimal addressing?
  • What is IPAM?
  • How do I create an IP addressing scheme?
  • How does IPv6 subnetting work?
  • What prefix length should I use on point-to-point links?
  • How should I name devices on my network?

Where do IP addresses and domain names come from?

Ultimate authority over names and addresses on the Internet comes from the Internet Corporation for Assigned Names and Numbers (ICANN), a not-for-profit corporation created in 1998 as a result of the United States government’s initiative to privatize control of the Internet. According to its bylaws, ICANN has three core responsibilities:

  1. Coordinate the allocation and assignment of the three sets of unique identifiers for the Internet, which are:
    (a) Domain names (forming a system referred to as “DNS”);
    (b) Internet protocol (IP) addresses and autonomous system (AS) numbers; and
    (c) Protocol port and parameter numbers.
  2. Coordinate the operation and evolution of the DNS root name server system.
  3. Coordinate policy development reasonably and appropriately related to these technical functions.

ICANN delegates control over most of the global DNS hierarchy to a number of independent registries which manage the hundreds of top-level domains (TLDs). For example, VeriSign currently maintains the .com and .net TLDs, while .org is maintained by the Public Interest Registry (PIR). (A complete listing of current TLDs and their registries is available here.) Each registry is responsible for maintaining the database of domain names belonging to its TLD(s). Note that some TLDs, such as .gov and .edu, impose certain restrictions on organizations which can register domain names.

However, most registries don’t register new domain names to end users directly. This process is delegated to ICANN-accredited registrars. Registrars act as middlemen between an individual or company registering a domain name for use and the registry responsible for the TLD to which the domain name belongs. In exchange for a small fee (usually around ten to thirty US dollars per year depending on the TLD), registrars inform the registry of the creation or modification of the domain name and maintain the relevant contact information for it.

There are literally thousands of commercial registrars which offer domain registration services around the world. Many web hosting providers offer domain registration as part of their hosting services. Some registries, like VeriSign, also function as registrars, but not all. While independent registries are contracted for maintenance of most TLDs, maintenance of the critical root DNS zone is left to a subordinate body of ICANN, the Internet Assigned Numbers Authority (IANA).

In addition to running the DNS root zone, IANA is responsible for delegating the assignment of IPv4 and IPv6 addresses from the global pool via Regional Internet Registries (RIRs), not to be confused with DNS TLD registries. There are five RIRs at the time of this writing, each servicing one or more geographic regions:

  • AFRINIC – Africa
  • APNIC – Asia-Pacific
  • ARIN – North America and parts of the Caribbean
  • LACNIC – Latin America and parts of the Caribbean
  • RIPE NCC – Europe, the Middle East, and parts of Central Asia

An Internet Service Provider (ISP) or end user obtains IP address allocations from its appropriate RIR, or from an intermediate national or local Internet registry. RIRs are also responsible for the assignment of autonomous system (AS) numbers, which are used to uniquely identify an organization on the Internet.

IANA also manages protocol number assignments. By far the most familiar of these are TCP and UDP port numbers. For example, TCP port 80 is assigned for HTTP, whereas UDP port 53 is assigned for DNS. IANA also administers assignments including IP protocols, Internet Control Message Protocol (ICMP) type codes, Simple Network Management Protocol (SNMP) number spaces, and much more.
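
As a small illustration, these well-known assignments can be looked up programmatically. The sketch below uses Python’s standard socket module, which consults the local system’s services database (a mirror of the IANA service name and port number registry):

    import socket

    # Well-known service-to-port assignments, as recorded in the local
    # services database (which mirrors the IANA registry).
    print(socket.getservbyname("http", "tcp"))    # 80
    print(socket.getservbyname("domain", "udp"))  # 53 (DNS)
    print(socket.getservbyname("smtp", "tcp"))    # 25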

Did we really run out of IPv4 addresses?

Yes, but we only had a couple decades of warning. On February 3, 2011, ICANN allocated the last five remaining /8 spaces, one to each of the five RIRs. Since then, it has become increasingly difficult to secure new IPv4 space. Many organizations have begun implementing IPv6, but its adoption has not been as quick as had been hoped. And it will be a long time before we actually start moving not just toward IPv6 but away from IPv4.

Can I buy more IP addresses?

In short, yes. I haven’t had much experience with the process personally, but Lindsay Hill has published an excellent account.

Does IPv6 really provide a bazillion addresses?

An IPv6 address is 128 bits long, which in theory yields 2^128 unique addresses. People love to make outlandish and pointless analogies using this number (“That’s enough to address every grain of sand on Earth!”) but in practice we’ll end up wasting far more addresses than we use.

For starters, most network segments will be assigned a 64-bit prefix regardless of how many end hosts they contain, to ensure compatibility with SLAAC and to simplify addressing schemes. So even on large multi-access segments we’ll only ever use a few thousand out of 2^64 possible addresses. The good news is that such waste is nothing to fret about; it’s actually part of how IPv6 is intended to work.

Additionally, the largest allowed allocation for most organizations is a /32. If all of your segments are addressed as /64s, that leaves only 32 bits of space over which you have any say in how networks are allocated. But hang on: 32 bits is equivalent numerically to the scope of the entire IPv4 world, just for your network! And on top of that, host addresses are essentially free because you’ll never exhaust the space of a /64.

So no, we can’t give every grain of sand on the planet an IPv6 address, as badly as we might want to. But the effectively infinite segment size coupled with the exponentially increased prefix space ensures that you should never find yourself backed into a corner when drafting an addressing scheme.
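
The arithmetic behind these claims is easy to verify. Here is a quick sketch in Python, using the /32 organizational allocation and /64 segment size discussed above:

    # Total IPv6 space: 2^128 addresses.
    print(f"{2 ** 128:.3e} total addresses")           # ~3.403e+38

    # Each /64 segment offers 2^64 host addresses, effectively inexhaustible.
    print(f"{2 ** 64:.3e} addresses per /64 segment")  # ~1.845e+19

    # A /32 allocation carved entirely into /64 segments yields 2^32 of them,
    # numerically the same size as the entire IPv4 address space.
    print(f"{2 ** (64 - 32)} /64 segments per /32")    # 4294967296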

Why does IPv6 use hexadecimal addressing?

Odds are that you first learned about IPv4 and subnetting, and then moved on to IPv6 and its crazy-long addresses with letters in them. IPv4 addresses are expressed in decimal as 8-bit chunks separated by periods, called dotted-decimal notation, whereas IPv6 addresses are expressed in hexadecimal as 16-bit chunks separated by colons. Hexadecimal was chosen for IPv6 because it allows for more compact addressing: Every byte is expressed with only two digits. This is in contrast to decimal, which can use up to three digits to express the value of a byte.

Consider the IPv6 address 2001:db8:4000:2d04:0002:55ff:fe47:b3c9. We could express this value in dotted-decimal format as 32.1.13.184.64.0.45.4.0.2.85.255.254.71.179.201, but that takes a lot longer to write or say. We could also express the IPv4 address 192.0.2.75 as c000:024b if we wanted to, but no one would recognize it as an IPv4 address.

Ultimately, both types of address are just binary strings (32 bits for IPv4 and 128 bits for IPv6). They only look different because we choose to write them differently.
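
Python’s ipaddress module makes this easy to see: both address types are just integers underneath, and only the printed notation differs. A short sketch using the example addresses above:

    from ipaddress import IPv4Address, IPv6Address

    v6 = IPv6Address("2001:db8:4000:2d04:0002:55ff:fe47:b3c9")
    v4 = IPv4Address("192.0.2.75")

    # The IPv6 address re-expressed in dotted-decimal, one byte at a time.
    print(".".join(str(b) for b in v6.packed))  # 32.1.13.184.64.0.45.4.0.2.85.255.254.71.179.201

    # The IPv4 address re-expressed as grouped hexadecimal.
    print(f"{int(v4):08x}")                     # c000024b, written c000:024b above

    # Underneath, both are plain binary integers (32 and 128 bits respectively).
    print(int(v4), int(v6))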

What is IPAM?

IP address management (IPAM) entails tracking the allocation and utilization of IP address space on a network. This can be as simple as using a spreadsheet to record the static assignments of individual IPs within a subnet, or as complex as deploying a dedicated IPAM application to manage assignments across a global network. Many IPAM products integrate with DNS and DHCP services to keep these systems synchronized automatically. Most organizations start tracking IP allocations in a spreadsheet and move to a dedicated application as the network grows. There are many open source and commercial IPAM products available to choose from.
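
For a small network, even a few lines of Python around the ipaddress module can stand in for the spreadsheet approach described above. This is purely an illustrative sketch (the subnet and hostnames are made up), not a substitute for a dedicated IPAM product:

    from ipaddress import ip_network

    subnet = ip_network("10.20.30.0/24")

    # Spreadsheet-style record of static assignments (made-up entries).
    assignments = {
        "10.20.30.1": "core-gw",
        "10.20.30.10": "print-srv",
        "10.20.30.11": "file-srv",
    }

    free = [str(h) for h in subnet.hosts() if str(h) not in assignments]
    print(f"{len(assignments)} assigned, {len(free)} free in {subnet}")
    print("next free address:", free[0])  # 10.20.30.2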

How do I create an IP addressing scheme?

Early in your career, you’ll likely be working on networks with established infrastructure and addressing schemes. These might be decent, or they might be terrible, but they’re there. You should always try in earnest to abide by existing schemes and policies where prudent, even if they’re not ideal.

For example, suppose convention has been to allocate a /16 prefix out of the 10.0.0.0/8 RFC 1918 space for every new site on your network, regardless of its size. This approach could be made more efficient by varying the size of new allocations as appropriate to the size of each site, but deviating from the policy now might compromise strategies that were enacted back when the existing scheme was drafted. It might be best to continue with the current obtuse allocation policy until a sufficient effort can be put forth to overhaul addressing across the entire network.

That said, sometimes it’s necessary to develop a new IP addressing design from scratch, whether for a greenfield deployment or to migrate away from a legacy scheme. While it would be impractical to discuss here all of the caveats you might encounter when laying out an address plan, there are a few solid guidelines that should keep you from shooting yourself in the foot.

  • Efficient aggregation is key. The ability to efficiently summarize routes is crucial to a healthy, stable network. Allocate networks so that they can be summarized intuitively by function or geographic area. For example, if assigning a campus network out of 172.16.0.0/16, you might assign each building a /20, and number each floor with a /24 within that /20 sequentially. While perhaps not as efficient, this is preferable to numbering all floors sequentially, as that approach weakens your ability to summarize by building.
  • Allow for future growth. Your addressing scheme should never be fully allocated at inception; you’ll always want to leave room for growth, whether that means additional buildings or larger segments or whatever. One good strategy is to, for each prefix you allocate, reserve the subsequent prefix for future use. This allows you to simply double the size of any prefix if it needs to expand in the future. So if you allocate 192.168.118.0/24 to a segment, go ahead and reserve 192.168.119.0/24 for future use.
  • Never assign networks by “natural” boundaries. It’s a rookie mistake to address networks using natural decimal increments, which are not binary-friendly. For example, numbering networks as 192.168.10.0/24, 192.168.20.0/24, 192.168.30.0/24, etc. The same goes for numbering by VLAN ID. These networks don’t aggregate at all.
  • Don’t forget infrastructure! One common pitfall is to painstakingly plan out every access segment, only to realize you’ve completely overlooked the network infrastructure itself. Be sure to set aside address space for point-to-point links, loopbacks, management, and other internal functions. Many people opt to address infrastructure out of a separate parent prefix, especially when using private addressing.
  • Plan for IPv4 and IPv6 in parallel. Every segment in your network should have both IPv4 and IPv6 allocations, even if you’re not using IPv6 yet. Planning for IPv6 now will avoid duplicating effort in the future.

How does IPv6 subnetting work?

Technically speaking, there is no such thing as IPv6 subnetting. When IPv4 was first developed, network size was determined by address class. We very quickly realized that this would not scale well, so the subnet mask was introduced just a few years later in 1985 (see RFC 950) and we've been using subnets ever since. However, IPv6 never included the concept of classful networks, so by extension there's no such thing as an IPv6 subnet. Instead, we refer to IPv6 networks as prefixes (a term which is also appropriate for IPv4 networks).

That said, “subnetting,” or the manipulation of prefix length, works with the exact same logic in IPv6 as it does in IPv4. Extending the prefix length by one bit doubles its size; removing one bit halves it. Although doing the math by hand requires converting addresses to binary (just as with IPv4), networkers are strongly encouraged to allocate addresses along nibble (four-bit) boundaries where feasible. For example, RIRs like ARIN and RIPE only allocate address space as /32, /36, /40, and so on. This allows for easy evaluation of prefix masks, as the mask will not split any hexadecimal characters.

Historically, point-to-point links were numbered with IPv4 using /30 prefix lengths. This was because the first and last IP addresses (as in any subnet) were deemed unusable as endpoint addresses. However, this was an extremely inefficient approach, as only 50% of addresses could be used: Each link consumed twice the IP space it needed.

Fortunately, virtually all modern routers support the use of /31 prefixes for point-to-point links, which was first standardized in RFC 3021 back in 2000. This allows us to achieve 100% addressing efficiency, as each link requires and consumes only two IPv4 addresses. For example, the two end points of a link numbered with 192.168.0.0/31 would be 192.168.0.0 and 192.168.0.1. The 192.168.0.0 address might seem unnatural, but rest assured it’s entirely valid with a /31 mask. So is 192.168.0.255/31.

Point-to-point links should be addressed with IPv6 in a similar manner, utilizing a /127 prefix. This was at one time considered poor practice, but is now recommended per RFC 6164. Some network operators opt to configure links with a /127 prefix but to only use the first /127 out of a /64 reserved for each link, rather than allocating sequential /127s. This keeps the link addressing very clean, since only the first two IPs will ever be used; for example, 2001:db8:ab:cd::/127 and 2001:db8:ab:cd::1/127. This might seem wasteful; however, it's perfectly acceptable and even encouraged to allocate a single /64 per segment regardless of its size.
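As a rough sketch of what this looks like on a Linux-based router using iproute2 (the interface name and prefixes below are examples only), one end of a point-to-point link could be configured as:

# IPv4 /31: the two usable addresses are .0 and .1
ip addr add 192.168.0.0/31 dev eth0
# IPv6 /127 taken from the /64 reserved for this link
ip -6 addr add 2001:db8:ab:cd::/127 dev eth0

The far end of the link would then use 192.168.0.1/31 and 2001:db8:ab:cd::1/127.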

How should I name devices on my network?

Your approach to network naming will depend largely on personal taste and how much freedom you’re allowed. Sometimes you will be required by policy to adopt the naming scheme of a parent entity or partner company. In other cases, the sheer breadth of your network might necessitate a naming scheme more complex than you’d prefer. While you can’t always control how your network devices are named, here are some tips for creating an organized hierarchy should you be afforded the opportunity.

  • Make liberal use of subdomains. If you have a number of small, geographically disparate sites, create a zone for each of them so that the hostnames for similar devices match across sites. For example, mail01.abc-east.example.com and mail01.abc-west.example.com. This prevents individual hostnames from becoming too unwieldy. But while it might make sense to go as far as creating one subdomain per building on a large campus, I’d probably stop short of creating one per floor. You’ll have to decide what degree of granularity is appropriate for your network.
  • Names should imply function. Some people like to serialize device names and use an external database to correlate function and location, but that’s a huge pain in the ass if you have to look things up manually. Use names like “web” for HTTP servers, “db” for databases, “dns” for DNS, and so on. Use a generic label like “app” for servers which host multiple applications.
  • Never name devices by brand. One rookie mistake is to confuse a device model with its function. For example, don't name your Cisco ASA firewalls asa1 and asa2. This would require the devices' names to be changed if they were ever replaced with a different model of firewall (an option you always want to leave open).
  • Naming schemes should be predictable. If you know there are four access switches at an office, they should be named something like switch01, switch02, switch03, and switch04, not Homer, Marge, Bart, and Lisa. While I appreciate the Simpsons reference, these names imply nothing about when the switches were installed or the position of each in the network hierarchy. (And what happens if you have to add three more switches? Which characters would you pick next?)
  • Add zero padding where appropriate. You might have noticed that I zero-padded the switch names above to two digits even though there are only four switches. Padding offers a high degree of confidence that you'll be able to add as many switches as you'll ever need (up to 99) and still have the names sort properly. Consider what happens if you have eleven switches and don't pad their numbers: Alphabetically, they would be sorted as switch1, switch10, switch11, switch2, switch3… Zero-padding keeps things tidy (see the quick demonstration after this list).
  • Use standardized abbreviations for geographic locations. Many organizations opt to use the three-character International Air Transport Association (IATA) code assigned to the nearest airport of a site. For example, a site in Ashburn, Virginia, would be designated IAD (the IATA designator for Dulles International Airport). If you have multiple sites in the same region, consider adding a numeric index to the code (IAD1, IAD2, IAD3…). Or if you need further granularity, consider adopting the UN's LOCODE system, which appends a unique three-character location code to a country's two-character code. Under LOCODE, the Ashburn site would be designated US-QAS. (The codes aren't always pretty, but they work.)
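To see why padding matters, here is a quick shell demonstration with made-up switch names, sorted the way a directory listing or monitoring system would sort them:

printf 'switch%d\n' $(seq 1 11) | sort     # unpadded: switch1, switch10, switch11, switch2, ...
printf 'switch%02d\n' $(seq 1 11) | sort   # padded:   switch01, switch02, ..., switch10, switch11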

Networking FAQ 4: Fundamentals


  • At what OSI layer does protocol X operate?
  • What’s the difference between a router and a multilayer switch?
  • What’s the difference between forwarding and control planes?
  • What’s the difference between MTU and MSS?
  • What’s the difference between a VLAN interface and a BVI?
  • How do tunnel interfaces work?
  • What do NAT terms like “inside local” mean?
  • Can I use the network and broadcast addresses in a NAT pool?
  • Why do we need IP addresses? Can’t we just use MAC addresses for everything?
  • Does QoS provide more bandwidth?

At what OSI layer does protocol X operate?

The Open Systems Interconnection (OSI) model is one of the first things you learn about networking. It’s a seven-layer reference model officially defined in ISO/IEC 7498-1 and reprinted in every certification study book ever published. It serves as a common point of reference when discussing how protocols relate to and inter-operate with one another. For example, we know that TCP is a layer four protocol, and therefore it sits “on top of” IP, which is a layer three protocol.

But what does that really mean? Who decides what layer a protocol belongs to? The OSI model was originally conceived back in the 1970s as a component of the OSI protocol suite, which was positioned as an early competitor to the emerging TCP/IP family of protocols (spoiler alert: TCP/IP won). Except for a handful of survivors (most notably the IS-IS dynamic routing protocol), OSI protocols are not in common use today. The reference model which was to govern how these protocols operated, however, lives on. So we end up trying to assign protocols from one family to layers originally defined for another.

For the most part, this works out alright. TCP and UDP ride on top of IP, which rides on top of Ethernet or PPP or whatever. But protocols don't always fit the mold: MPLS, for example, is sometimes referred to as "layer 2.5" since it provides neither framing nor end-to-end addressability. (Unlike IP addresses, MPLS labels are swapped at each hop along a path as a packet transits a network.) Of course, inventing a layer between two other layers defeats the purpose of a standardized reference model in the first place, and just shows how dependent some people are on reducing every logical concept to a number.

Technically speaking, no protocol from the TCP/IP stack has an official assignment to an OSI layer, because they're not of the same family. Apples and oranges. A reference model is just that: a reference. It helps illustrate the dependencies protocols have on one another, and where they sit in relation to one another, but it doesn't strictly govern their function. To give the concept any more weight than that is to miss its purpose entirely.

But if anyone asks, MPLS is a layer three protocol.

What’s the difference between a router and a multilayer switch?

Back in simpler times, a router was a device that forwarded packets based on their IP addresses and offered a variety of interface types: Ethernet, T1, serial, OC-3, and others. Conversely, a switch was a device which forwarded packets (or frames, if you prefer) based on their MAC addresses and included only Ethernet ports.

Since the early 2000s the industry has seen two major trends which have greatly upset this understanding. First, the introduction of the multilayer switch meant it was possible for a switch not only to forward packets based on IP addresses, but to participate in dynamic routing protocols just like a router. Second, carriers began migrating away from legacy long-haul circuit technologies in favor of Ethernet for its speed and lower cost. In fact it’s fairly common for routers today to consist entirely of Ethernet interfaces, just like their switch counterparts.

So where do we draw the line? Is there even a line anymore? The practical distinction between router and switch boils down to a few key functions:

  • Port density. Enterprise-level switches typically come in 24- and 48-port variants, either as standalone devices or as modular chassis. Some are designed as separate physical chassis which can be stacked via a flexible external backplane connection. The goal is to fit as many physical interfaces into as dense a space as possible. A router, by contrast, might have far fewer individual interfaces split across several field-replaceable modules.
  • Speed. Switches are built primarily for speed, which is a function of the hardware chipset sitting behind the ports. It is common for even modest access switches today to support non-blocking line-rate connectivity.
  • Intelligence. This is the key reason you might need a router instead of a switch. A router serves as a point of intelligent policy enforcement. This includes functions like network address translation, deep packet inspection (looking beyond the outer protocol headers), stateful firewalling, encryption, and similar more involved operations not supported on a multilayer switch.

That’s the current theory regarding purpose-built hardware, anyway. With the current push toward virtual appliances, commodity hardware is being re-purposed for a variety of roles.

What’s the difference between forwarding and control planes?

This is a source of much confusion for people new to networking. Simply put, the forwarding plane handles moving a packet from point A to point B. The control plane handles functions which determine how the forwarding plane operates.

Let’s say you’ve got a router running OSPF. It exchanges routes with neighboring OSPF routers and builds a table of all the routes on the network. Once the routing table has been built, the router installs the best route for each known destination into its forwarding table. This is a control plane function.

When the router receives an IP packet on an interface, it looks up the destination address of that packet in its forwarding table to determine out which interface the packet should be sent. The packet is then moved in memory to the output buffer of that interface and transmitted onto the wire. This is a forwarding plane function.

See the difference? The forwarding plane handles the reception and transmission of packets, whereas the control plane governs how forwarding decisions are made. Forwarding plane operations are typically done “in hardware,” which is to say they are performed by specialized chipsets requiring little interaction with the device’s general-purpose CPU. Control plane functions, on the other hand, are handled in software by CPU and memory very similar to what’s in your personal computer. This is because control protocols perform very complex functions which don’t usually need to occur in real-time. For example, it’s usually not a big deal if there’s a delay of several milliseconds before installing a new route in the forwarding table. Such a delay could be devastating for the performance of the forwarding plane, however.

What’s the difference between MTU and MSS?

The maximum transmission unit (MTU) of a network protocol dictates the maximum amount of data that can be carried by a single packet. Usually when we talk about MTU we’re referring to Ethernet (although other protocols have their own MTUs). The default Ethernet MTU for most platforms is 1500 bytes. This means that a host can transmit a frame carrying up to 1500 bytes of payload data, which does not include the 14-byte Ethernet header (or 18 bytes if tagged with IEEE 802.1Q) or 4-byte trailer, resulting in a total frame size of 1518 bytes (or 1522 bytes with IEEE 802.1Q). Many network devices support jumbo frames by way of increasing the default MTU as high as 9216 bytes, but this is administratively configurable.

Maximum segment size (MSS) is a measure specific to TCP. It indicates the maximum TCP payload of a packet; essentially it is the MTU for TCP. The TCP MSS is calculated by an operating system based on the Ethernet MTU (or other lower layer protocol MTU) of an interface. Because TCP segments must fit within Ethernet frames, the MSS should always be less than the frame MTU. Ideally, the MSS should be as large as possible after accounting for the IP and TCP headers.

Assuming an Ethernet MTU of 1500 bytes, we can subtract the IPv4 header (20 bytes) and TCP header (another 20 bytes) to arrive at an MSS of 1460 bytes. IPv6, with its longer 40-byte header, would allow an MSS of up to 1440 bytes.
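The arithmetic is easy to check in a shell; the 20-byte TCP header figure assumes no TCP options are in use:

mtu=1500
echo "IPv4 MSS: $(( mtu - 20 - 20 ))"   # 1460 bytes
echo "IPv6 MSS: $(( mtu - 40 - 20 ))"   # 1440 bytes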

TCP MSS is negotiated once at the initiation of a session. Each host includes its MSS value as a TCP option with its first packet (the one with the SYN flag set), and both hosts select the lower of the two MSS values as the session MSS. Once selected, the MSS does not change for the life of the session.

What’s the difference between a VLAN interface and a BVI?

A VLAN interface, also referred to as a switch virtual interface (SVI) or routed VLAN interface (RVI), is a virtual interface created on a multilayer switch to serve as the routed interface for a VLAN, often to provide a default gateway out of the local subnet. VLAN interfaces typically operate and are configured the same as physical routed interfaces: They can be assigned IP addresses, participate in VRRP, have ACLs applied, and so on. You can think of it as a physical interface inside the switch that's assigned to a VLAN just like one of the physical ports on the outside of the switch.

A bridge group virtual interface (BVI) serves a similar function, but exists on a router where there is no concept of a VLAN (because all its ports normally function at layer three) instead of a switch. A bridge group is a set of two or more physical interfaces operating at layer two, with all member interfaces sharing a common broadcast domain. The BVI is tied to a bridge group to serve as a single virtual layer three interface for all segments connected to the bridge group. When a router has interfaces operating at both layers two and three it is referred to as integrated routing and bridging (IRB).

While VLAN interfaces are a necessity of multilayer switching, IRB is typically used only in niche designs which call for a layer two domain to span multiple router interfaces, such as on a wireless access point.

How do tunnel interfaces work?

A lot of people struggle to understand the concept behind tunnel interfaces. Remember that a tunnel is just the effect of encapsulating one packet inside another as it passes between two points. Tunnel interfaces are used to achieve this encapsulation for route-based VPNs, which can provide a layer of security or abstraction from the underlying network topology. There are a number of encapsulation methods available, including IPsec, GRE, or just plain IP-in-IP.

Although tunnel interfaces are virtual in nature, they behave just like any other interface when it comes to routing decisions. When a packet is routed “out” a tunnel interface, it is encapsulated and a second routing decision is made based on the new (outer) header. This new packet is then forwarded across the wire to another device. Eventually, the packet reaches the far tunnel endpoint, where its outer header is stripped away. A routing decision is made on the original inner packet, which can be forwarded on to its destination in its original form.
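To make this concrete, here is a minimal sketch of a plain GRE tunnel being brought up on a Linux router with iproute2; every address and the interface name are illustrative only:

# Outer header: packets sent into the tunnel get wrapped in GRE/IP between these two endpoints
ip tunnel add tun0 mode gre local 203.0.113.1 remote 198.51.100.2
ip link set tun0 up
# The tunnel interface gets its own (inner) address like any other interface
ip addr add 10.0.0.0/31 dev tun0
# Routing a prefix "out" the tunnel is what triggers the encapsulation
ip route add 172.16.50.0/24 dev tun0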

For more detail on the whole process, check out Visualizing Tunnels.

What do NAT terms like “inside local” mean?

An IP address within the context of NAT can be considered one of these four classes:

  • Inside global
  • Inside local
  • Outside local
  • Outside global

Unfortunately, these terms are rarely explained in documentation to the satisfaction of the reader. Each term describes two separate attributes of the address: location and perspective. Location is described by the first word in the tuple, either inside or outside. It refers to the "side" of the NAT boundary router on which the address logically exists. In a typical NAT deployment, inside addresses will usually (but not necessarily) be private RFC 1918 addresses, and outside addresses will usually be globally routable (public) IP addresses.

Perspective refers to the side of the NAT boundary from which the address is observed: local or global. If an address is seen by an inside host, it is being observed locally. If an address is seen by an outside host, it is observed globally.

[Chart: NAT address classes by location (inside/outside) and perspective (local/global)]

If the terms are still foggy, check out this article on NAT types for a simple example.

Can I use the network and broadcast addresses in a NAT pool?

Yes! Many people assume that because the network and broadcast addresses of a subnet are unusable for host addressing, they cannot be used in a NAT pool. However, a NAT pool has no concept of a subnet mask: This is why you can define NAT pools using arbitrary ranges that don’t conform to binary boundaries (for example, 192.168.0.10 through 192.168.0.20). This includes the IPs which would be designated as the network or broadcast address of a “real” subnet.

Why do we need IP addresses? Can’t we just use MAC addresses for everything?

When you first learned that MAC addresses were intended to be globally unique, you might have wondered why we don’t just use them for addressing traffic end-to-end and skip IP altogether. There are a few very good reasons the Internet evolved to use IP addresses. The first is that not all networks have MAC addresses: The MAC address is unique to the IEEE 802 family of networks. This can be easy to forget on modern networks where nearly everything is Ethernet or some variation on it (like IEEE 802.11 wireless), but this was a much more prominent concern several decades ago when networks were a mishmash of Token Ring, Ethernet, Frame Relay, ATM, and other protocols long since abandoned.

Another reason for IP addresses is that they're portable. A MAC address is burned into a network adapter and stuck there for life, whereas IP addresses can be changed by an administrator arbitrarily or even assigned dynamically. (Yes, it's usually possible to reconfigure a NIC to use a MAC different from its burned-in address, but this was not the intended use when Ethernet was conceived.)

However, the most important justification behind IP addresses is that they are aggregatable. That is, a collection of endpoints sharing a common segment can be summarized into a single route. This is not usually possible with MAC addresses, which are assigned pseudo-randomly. Using MAC addresses for end-to-end communication would require every router on the Internet to know the address of every single host on the Internet. This approach obviously would not scale well.

Does QoS provide more bandwidth?

A common misunderstanding concerning Quality of Service (QoS) controls is that they somehow allow you to squeeze more packets through a link. This is not the case. If you have, for instance, an Internet circuit rate-limited to 10 Mbps, you’re never going to be able to send more than 10 Mbps (and probably not quite that much) at a time. The function of QoS is to prefer some classes of traffic over others so that during periods of congestion (when you’re attempting to send more than 10 Mbps across that link), less-important traffic is dropped in favor of passing traffic with higher preference.

QoS controls are usually employed to protect real-time traffic like voice and video conferencing from traffic which is much more tolerant to loss and delay like web, email, and file transfers. They might also be used to prevent large data transfers like server backups from consuming all the throughput available on a network.

Consider a scenario where you have a branch office connected via bonded T1 circuits with an aggregate throughput of 3 Mbps. This link carries both voice and data traffic. If the link becomes congested and both types of traffic suffer, you can implement QoS controls to guarantee a certain portion of the available throughput to voice traffic. Data traffic will be permitted only the portion of throughput not consumed by voice traffic. However, if data traffic is slowed to the point where users complain, QoS can’t help any more. You’ll need to upgrade the circuit (or add a new one) to provide additional throughput.

Introduction To Linux (Starters Guide)



WHY USE LINUX?


The first question is – what are the benefits of using Linux instead of Windows? This is in fact a constant debate between the Windows and Linux communities and while we won’t be taking either side, you’ll discover that our points will favour the Linux operating system because they are valid :)

AND THE REASONS FOR USING LINUX ….

While we could list a billion technical reasons, we will focus on those that we believe will affect you most:

•  Linux is free. That's right – if you never knew it, the Linux operating system is free of charge. No user or server licenses are required*! If, however, you walk into an IT shop or bookstore, you will find various Linux distributions on the shelf available for purchase; that cost is purely to cover the packaging and any support bundled with the distribution.

* We must note that the newer 'Advanced Linux Servers', now available from companies such as Redhat, actually charge a license fee because of the support and update services they provide for the operating system. In our opinion, these services are rightly charged for, since they are aimed at businesses that will run the operating system in critical environments where downtime is unacceptable and immediate support is non-negotiable.

•Linux is developed by hundreds of thousands of people worldwide. Because of this community development model there are very fresh ideas going into the operating system and many more people to find glitches and bugs in the software than any commercial company could ever afford (yes, Microsoft included).

•Linux is rock solid and stable, unlike Windows, where just after you've typed a huge document it suddenly crashes, making you lose all your work!

Runtime errors and crashes are quite rare on the Linux operating system due to the way its kernel is designed and the way processes are allowed to access it. No one can guarantee that your Linux desktop or server will not crash at all, because that would be a bit extreme, however, we can say that it happens a lot less frequently in comparison with other operating systems such as Windows.

For the fanatics of the 'blue screen of death' – you'll be disappointed to find out there is no such thing in the world of Linux. However, not all is lost, as there have been some really good 'blue screen of death' screen savers released for Linux's graphical X Window System.

You could also say that evidence of the operating system’s stability is the fact that it’s the most widely used operating system for running important services in public or private sectors. Worldwide statistics show that the number of Linux web servers outweigh by far all other competitors:

[Chart: Netcraft web server survey results]

Today, Netcraft reports that for the month of June 2005, out of a total of 64,808,485 web servers, 45,172,895 are powered by Apache while only 13,131,043 use Microsoft's IIS web server!

•Linux is much more secure than Windows: there are almost no viruses for Linux and, because there are so many people working on Linux, whenever a bug is found, a fix is provided much more quickly than with Windows. Linux is also much more difficult for hackers to break into, as it has been designed from the ground up with security in mind.

•Linux uses less system resources than Windows. You don't need the latest, fastest computer to run Linux. In fact you can run a functional version of Linux from a floppy disk with a computer that is 5-6 years old! At this point, we can also mention that one of our lab firewalls still runs on a K6-266 3DNow! processor with 512 MB RAM! Of course, no graphical interfaces are loaded, as we only work in CLI mode!

•Linux has been designed to put power into the hands of the user so that you have total control of the operating system and not the other way around. A person who knows how to use Linux has the computer far more ‘by the horns’ than any Windows user ever has.

•Linux is fully compatible with all other systems. Unlike Microsoft Windows, which is at its happiest when talking to other Microsoft products, Linux is not ‘owned’ by any company and thus it keeps its compatibility with all other systems. The simplest example of this is that a Windows computer cannot read files from a hard-disk with the Linux file system on it (ext2 & ext3), but Linux will happily read files from a hard-disk with the Windows file system (fat, fat32 or ntfs file system), or for that matter any other operating system.

Now that we’ve covered some of the benefits of using Linux, let’s start actually focusing on the best way to ease your migration from the Microsoft world to the Linux world, or in case you already have a Linux server running – start unleashing its full potential!

The first thing we will go over is the way Linux deals with files and folders on the hard-disk, as this is completely different from the way things are done in Windows and is usually one of the first challenges faced by Linux newbies.


THE LINUX FILE SYSTEM


A file system is nothing more than the way the computer stores and retrieves all your files. These files include your documents, programs, help files, games, music etc. In the Windows world we have the concept of files and folders.

A folder (also known as a directory) is nothing more than a container for different files so that you can organise them better. In Linux, the same concept holds true — you have files, and you have folders in which you organise these files.

The difference is that Windows stores files in folders according to the program they belong to (in most cases), in other words, if you install a program in Windows, all associated files — such as the .exe file that you run, the help files, configuration files, data files etc. go into the same folder. So if you install for example Winzip, all the files relating to it will go into one folder, usually c:\Program Files\Winzip.

In Linux however, files are stored based on the function they perform. In other words, all help files for all programs will go into one folder made just for help files, all the executable (.exe) files will go into one folder for executable programs, and all programs' configuration files will go into a folder meant for configuration files.

This layout has a few significant advantages, as you always know where to look for a particular file. For example, if you want to find the configuration file for a program, you're bound to find it in the directory reserved for configuration files (/etc, covered below), no matter which program it belongs to.

With the Windows operating system, it's highly likely the configuration file will be placed in the installation directory or some other Windows system subfolder. In addition, registry entries are something you won't be able to keep track of without the aid of a registry tracking program – something that does not exist in the Linux world since there is no registry!

Of course in Linux everything is configurable to the smallest level, so if you choose to install a program and store all its files in one folder, you can, but you will just complicate your own life and miss out on the benefits of a file system that groups files by the function they perform rather than arbitrarily.

Linux uses a hierarchical file system; in other words, there is no concept of 'drives' like c: or d:. Everything starts from what is called the '/' directory (known as the root directory). This is the top-most level of the file system and all folders are placed at some level from here. This is how it looks:

[Diagram: the Linux directory hierarchy, starting at the root directory '/']

Because files are stored according to their function, you will see many of the same folders on any Linux system.

These are ‘standard’ folders that have been pre-designated for a particular purpose. For example the ‘bin’ directory will store all executable programs (the equivalent of Windows ‘.exe ‘ files).

Remember also that in Windows you access directories using a backslash (eg c:\Program Files) whereas in Linux you use a forward slash (eg: /bin ).

In other words you are telling the system where the directory is in relation to the root or top level folder.

So to access the cdrom directory shown in the diagram above, you would use the path /mnt/cdrom.

To access the home directory of user ‘sahir’ you would use /home/sahir.

So it’s now time to read a bit about each directory function to help us get a better understanding of the operating system:

• bin – This directory is used to store the system’s executable files. Most users are able to access this directory as it does not usually contain system critical files.

• etc – This folder stores the configuration files for the majority of services and programs run on the machine. These configuration files are all plain text files, so you can open one and edit a program's configuration instantly. Network services such as samba (Windows networking), dhcp, http (apache web server) and many more rely on this directory! You should be careful with any changes you make here.

• home – This is the directory in which every user on the system has his own personal folder for his own personal files. Think of it as similar to the ‘My Documents’ folder in Windows. We’ve created one user on our test system by the name of ‘sahir’ – When Sahir logs into the system, he’ll have full access to his home directory.

• var – This directory is for any file whose contents change regularly, such as system log files – these are stored in /var/log. Temporary files that are created are stored in the directory /var/tmp.

• usr – This is used to store any files that are common to all users on the system. For example, if you have a collection of programs you want all users to access, you can put them in the directory /usr/bin. If you have a lot of wallpapers you want to share, they can go in /usr/wallpaper. You can create directories as you like.

• root – This can be confusing as we have a top level directory ‘/’ which is also called ‘the root folder’.

The ‘root’ (/root) directory is like the ‘My Documents’ folder for a very special user on the system – the system’s Administrator, equivalent to Windows ‘Administrator’ user account.

This account has access to any file on the system and can change any setting freely. Thus it is a very powerful account and should be used carefully. As a good practice, even if you are the system Administrator, you should not log in using the root account unless you have to make some configuration changes.

It is a better idea to create a ‘normal’ user account for your day-to-day tasks since the ‘root’ account is the account for which hackers always try to get the password on Linux systems because it gives them unlimited powers on the system. You can tell if you are logged in as the root account because your command prompt will have a hash ‘#’ symbol in front, while other users normally have a dollar ‘$‘ symbol.

• mnt – We already told you that there are no concepts of ‘drives’ in Linux. So where do your other hard-disks (if you have any) as well as floppy and cdrom drives show up?

Well, they have to be ‘mounted’ or loaded for the system to see them. This directory is a good place to store all the ‘mounted’ devices. Taking a quick look at our diagram above, you can see we have mounted a cdrom device so it is showing in the /mnt directory. You can access the files on the cdrom by just going to this directory!

• dev – Every system has its devices, and the Linux O/S is no exception to this! All your system's devices, such as com ports, parallel ports and others, exist in the /dev directory as files and directories! You'll hardly be required to deal with this directory, however you should be aware of what it contains.

• proc – Think of the /proc directory as a deluxe version of the Windows Task Manager. The /proc directory holds all the information about your system's processes and resources. Here again, everything exists as a file and directory, something that shouldn't surprise you by now!

By examining the appropriate files, you can see how much memory is being used, how many tcp/ip sessions are active on your system, get information about your CPU usage and much more. All programs displaying information about your system use this directory as their source of information!

• sbin – The /sbin directory's role is similar to that of the /bin directory we covered earlier, with the difference that it's only accessible by the 'root' user. The reason for this restriction, as you might have already guessed, is the sensitive applications it holds, which are generally used for the system's configuration and various other important services. Consider it an equivalent of the Windows Administrative Tools folder and you'll get the idea.

Lastly, if you’ve used a Linux system, you’ll have noticed that not many files have an extension – that is, the three letters after the dot, as found in Windows and DOS: file1.txt , winword.exe , letter.doc.

While you can name your files with extensions, Linux doesn't really care about the 'type' of a file. There are very quick ways to instantly check what type of file anything is (the 'file' command, for example, will tell you). You can even make just about any file in Linux executable at whim!

Linux is smart enough to recognise the purpose of a file so you don’t need to remember the meaning of different extensions.

You have now covered the biggest hurdle faced by new Linux users. Once you get used to the file system you’ll find it is a very well organised system that makes storing files a very logical process. There is a system and, as long as you follow it, you’ll find most of your tasks are much simpler than other operating system tasks.


THE LINUX COMMAND LINE


Those who are already familiar with the topic could skip this whole section, but we highly recommend you read it because this is the heart of Linux. We also advise you to go through this section while sitting in front of the computer.

Most readers will be familiar with DOS in Windows and opening a DOS box. Well, let’s put it this way.. comparing the power of the Linux command line with the power of the DOS prompt is like comparing a Ferrari with a bicycle!

People may tell you that the Linux command line is difficult and full of commands to remember, but it’s the same thing in DOS and just remember – you can get by in Linux without ever opening a command line (just like you can do all your work in Windows without ever opening a DOS box !). However, the Linux command line is actually very easy, logical and once you have even the slightest ability and fluency with it, you’ll be amazed as to how much faster you can do complicated tasks than you would be able to with the fancy point-and-click graphics and mouse interface.

To give you an example, imagine the number of steps it would take you in Windows to find a file that has the word “hello” at the end of a line, open that file, remove the first ten lines, sort all the other lines alphabetically and then print it. In Linux, you could achieve this with a single command! – Have we got your attention yet ?!

Though you might wonder what you could achieve by doing this – the point is that you can do incredibly complicated things by putting together small commands, exactly like using small building blocks to make a big structure.

We’ll show you a few basic commands to move around the command line as well as their equivalents in Windows. We will first show you the commands in their basic form and then show you how you can see all the options to make them work in different ways.

THE BASIC COMMANDS

As a rule, note that anything typed in ‘single quotes and italics‘ is a valid Linux command to be typed at the command line, followed by Enter.

We will use this rule throughout all our tutorials to avoid confusion and mistakes. Do not type the quotes and remember that, unlike Windows, Linux is case sensitive, thus typing ‘Document’ is different from typing ‘document’.

•  ls – You must have used the ‘dir’ command on Windows… well this is like ‘dir’ command on steroids! If you type ‘ls‘ and press enter you will see the files in that directory, there are many useful options to change the output. For example, ‘ls -l‘ will display the files along with details such as permissions (who can access a file), the owner of the file(s), date & time of creation, etc. The ‘ls‘ command is probably the one command you will use more than any other on Linux. In fact, on most Linux systems you can just type ‘dir‘ and get away with it, but you will miss out on the powerful options of the ‘ls‘ command.

[Screenshot: 'ls' and 'ls -l' output]
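Since we can't reproduce the screen output here, a few of the more common variations look like this (run them in any directory):

ls -l    # long listing: permissions, owner, group, size, date and name
ls -lt   # the same listing, sorted by modification time (newest first)
ls -la   # include hidden files, i.e. names starting with a dot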

•  cd – This is the same as the DOS command: it changes the directory you are working in. Suppose you are in the ‘/var/cache’ directory and want to go to its subfolder ‘samba’ , you can type ‘cd samba‘ just as you would if it were a DOS system.

[Screenshot: changing directories with 'cd']

Imagine you were at the ‘/var/cache’ directory and you wanted to change to the ‘/etc/init.d’ directory in one step, you could just type ‘cd /etc/init.d‘ as shown above. On the other hand, if you just type ‘cd‘ and press enter, it will automatically take you back to your personal home directory (this is very useful as all your files are usually stored there).

We also should point out that while Windows and DOS use the well known back-slash ' \ ' in a full path, Linux uses the forward-slash ' / '. This explains why we use the command 'cd /etc/init.d' and not 'cd \etc\init.d' as most Windows users would expect.

•  pwd – This will show you the directory you are currently in, should you forget. It’s almost like asking the operating system ‘Where am I right now ?’. It will show you the ‘present working directory’.

[Screenshot: 'pwd' output]

•  cp – This is the equivalent of the Windows ‘copy’ command. You use it to copy a file from one place to another. So if you want to copy a file called ‘document’ to another file called ‘document1’ , you would need to type ‘cp document document1‘. In other words, first the source, then the destination.

[Screenshot: copying a file with 'cp']

The 'cp' command will also allow you to provide the path to copy to. For example, if you wanted to copy 'document' to the home directory of user1, you would type 'cp document /home/user1/'. If you want to copy something to your own home directory, you don't need to type the full path (e.g. /home/yourusername); you can use the shortcut '~' (tilde), so to copy 'document' to your home directory you can simply type 'cp document ~'.

•  rm – This is the same as the 'del' or 'delete' command in Windows. It will delete the files you specify. So if you need to delete a file named 'document', you type 'rm document'. The system will ask if you are sure, so you get a second chance! If you add the '-f' flag (e.g. 'rm -f document'), you will force (-f) the system to execute the command without requiring confirmation, which is useful when you have to delete a large number of files.

[Screenshot: deleting a file with 'rm']

In all Linux commands you can use the '*' wildcard that you use in Windows, so to delete all files ending with .txt in Windows you would type 'del *.txt' whereas in Linux you would type 'rm -f *.txt'. Remember, we used the '-f' flag because we don't want to be asked to confirm the deletion of each file.

[Screenshot: deleting files with 'rm -f *.txt']

To delete a folder, you have to give rm the ‘-r‘ (recursive) option; as you might have already guessed, you can combine options like this: ‘rm -rf mydirectory‘. This will delete the directory ‘mydirectory’ (and any subdirectories within it) and will not ask you twice. Combining options like this works for all Linux commands.

•mkdir / rmdir – These two commands are the equivalent of Windows’ ‘md’ and ‘rd’, which allow you to create (md) or remove (rd) a directory. So if you type ‘mkdir firewall‘, a directory will be created named ‘firewall’. On the other hand, type ‘rmdir firewall‘ and the newly created directory will be deleted. We should also note that the ‘rmdir‘ command will only remove an empty directory, so you might be better off using ‘rm -rf‘ as described above.

[Screenshot: creating and removing a directory with 'mkdir' and 'rmdir']

•mv – This is the same as the ‘move’ command on Windows. It works like the ‘cp‘ or copy command, except that after the file is copied, the original source file is deleted. By the way, there is no rename command on Linux because technically moving and renaming a file is the same thing!

In this example, we recreated the ‘firewall‘ directory we deleted previously and then tried renaming it to ‘firewall-cx‘. Lastly, the new directory was moved to the ‘/var’ directory:

[Screenshot: renaming and moving a directory with 'mv']

That should be enough to let you move around the command line or the ‘shell’, as it’s known in the Linux community. You’ll be pleased to know that there are many ways to open a shell window from the ‘X’ graphical desktop, which can be called an xterm, or a terminal window.

•  cat / more / less – These commands are used to view files containing text or code. Each command will allow you to perform a special function that is not available with the others so, depending on your work, some might be used more frequently than others.

The ‘cat‘ command will show you the contents of any file you select. This command is usually used in conjunction with other advanced commands such as ‘grep‘ to look for a specific string inside a large file which we’ll be looking at later on.

When issued, the ‘cat’ command will run through the file without pausing until it reaches the end, just like a file scanner that examines the contents of a file while at the same time showing the output on your screen:

[Screenshot: 'cat messages' output scrolling by]

In this example, we have a whopping 215 KB text file containing the system's messages. We issued the 'cat messages' command and the file's contents were immediately listed on our screen; this went on for a minute until the 'cat' command reached the end of the file and then exited.

Not much use for this example, but keep in mind that we usually pipe the output to other commands in order to give us some usable results :)

more‘ is used in a similar way, but will pause the screen when it has filled with text, in which case we need to hit the space bar or enter key to continue scrolling per page or line. The ‘up’ or ‘down’ arrow keys are of no use for this command and will not allow you to scroll through the file – it’s pretty much a one way scrolling direction (from the beginning to the end) with the choice of scrolling per page (space bar) or line (enter key).

The ‘less‘ command is an enhanced version of ‘more‘, and certainly more useful. With the less command, you are able to scroll up or down a file’s content. To scroll down per page, you can make use of the space bar, or CTRL-D. To scroll upwards towards the beginning of the file, use CTRL-U.
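For example (the log file path below is typical of most distributions, but yours may differ):

cat /etc/passwd           # dump a short file straight to the screen
more /var/log/messages    # page forward through a long file, one screen at a time
less /var/log/messages    # page in both directions; press 'q' to quit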

It is not possible for us to cover all the commands and their options because there are thousands! However, we will teach you the secret to using Linux — that is, how to find the right tool (command) for a job, and how to find help on how to use it.

CAN I HAVE SOME HELP PLEASE?

To find help on a command, you type the command name followed by '--help'. For example, to get help on the 'mkdir' command, you will type 'mkdir --help'. But there is a much more powerful way…

For those who read our previous section, remember we told you that Linux stores all files according to their function? Well Linux stores the manuals (help files) for every program installed, and the best part is that you can look up the ‘man pages’ (manuals) very easily. All the manuals are in the same format and show you every possible option for a command.

To open the manual of a particular command, type ‘man‘ followed by the command name, so to open the manual for ‘mkdir’ type ‘man mkdir‘:

[Screenshot: the 'mkdir' man page]

Interestingly, try getting help on the 'man' command itself by typing 'man man'. This is the most authoritative and comprehensive source of help for anything you have in Linux, and the best part is that every program comes with its manual! Isn't this so much better than trying to find a help file or readme.txt file :) ?

Here’s another incredibly useful command — if you know the task you want to perform, but don’t know the command or program to use, use the ‘apropos‘ command. This command will list all the programs on the system that are related to the task you want to perform. For example, say you want to send email but don’t know the email program, you can type ‘apropos email‘ and receive a list of all the commands and programs on the system that will handle email! There is no equivalent of this on Windows.

WHERE IS THAT FILE?

Another basic function of any operating system is knowing how to find or search for a missing or forgotten file, and if you have already asked yourself this question, you’ll be pleased to find out the answer :)

The simplest way to find any file in Linux is to type ‘locate‘ followed by the filename. So if you want to find a file called ‘document’ , you type ‘locate document‘. The locate command works using a database that is usually built when you are not using your Linux system, indexing all your files and directories to help you locate them.

You can use the more powerful ‘find‘ command, but I would suggest you look at its ‘man’ page first by typing ‘man find‘. The ‘find‘ command differs from the ‘locate‘ command in that it does not use a database, but actually looks for the file(s) requested by scanning the whole directory or file system depending on where you execute the command.

Logically, the ‘locate‘ command is much faster when looking for a file that has already been indexed in its database, but will fail to discover any new files that have just been installed since they haven’t been indexed! This is where the ‘find‘ command comes to the rescue!
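For example, assuming the locate package and its index are installed on your system:

locate smb.conf               # fast lookup from the pre-built index
find /etc -name 'smb.conf'    # scans the directory tree right now: slower, but always current
updatedb                      # rebuild the locate index by hand (run as root; normally scheduled automatically)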


INSTALLING SOFTWARE ON LINUX


Installing software in Linux is very different from Windows for one very simple reason: most Linux programs come in 'source code' form. This allows you to modify any program (if you're a programmer) to suit your purposes! While this is incredibly powerful for a programmer, most of us who are not programmers just want to start using the program!

SHOW ME THE WAY MASTER….

Most programs will come ‘zipped’ just like they do in Windows, in other words they pack all the files together into one file and compress it to a more manageable size. Depending on the zipping program used, the method of unzipping may vary, however, each program will have step by step instructions on how to unpack it.

Most of the time the 'tar' program will be used to unpack a package, and unzipping the package is fairly straightforward. This is initiated by typing 'tar -zxvf file-to-unzip.tgz', where 'file-to-unzip.tgz' is the actual filename you wish to unzip. We will explain the four popular options we've used (zxvf), but you can read tar's 'man' page if you are stuck or need more information.

As mentioned, the ‘tar‘ program is used to unpack a package we’ve downloaded and would like to install. Because most packages use ‘tar’ to create one file for easy downloads, gzip (Linux’s equivalent to the Winzip program) is used to compress the tar file (.gz), reducing the size and making it easier to transfer. This also explains the reason most files have extensions such as ‘.tgz’ or ‘.tar.gz’.

To make life easy, instead of giving two commands to decompress (unzip) and unpack the package, we provide tar with the -z option to automatically unzip the package and then proceed with unpacking it (-x). Here are the options in greater detail:

-z : Unzip tar package before unpacking it.

-x : Extract/Unpack the package

-v : Verbosely list files processed

-f : use archive file (filename provided)

[Screenshot: unpacking a package with 'tar -zxvf']

Because the list of files was long, we’ve cut the bottom part to make it fit in our small window.

Once you have unzipped the program, go into its directory and look for a file called INSTALL; most programs will come with this file. It contains detailed instructions on how to install the program, including the necessary commands to be typed, depending on the Linux distribution you have. After you've got that out of the way, you're ready to use the three magic commands that install 99% of all software in Linux :)

Open the program directory and type ./configure. [1st magic command]

[Screenshot: running './configure']

You’ll see a whole lot of output that you may not understand; this is when the software you’re installing is automatically checking your system to analyze the options that will work best. Unlike the Windows world, where programs are made to work on a very general computer, Linux programs automatically customize themselves to fit your system.

Think of it as the difference between buying ready-made clothes and having tailor made clothes especially designed for you. This is one of the most important reasons why programs are in the ‘source code’ form in Linux.

In some cases, the ./configure command will not succeed and will produce errors that will not allow you to take the next step and compile your program. In these cases, you must read the errors, fix any missing library files (the most common cause) or other problems, and try again:

[Screenshot: './configure' failing with errors]

As you can see, we’ve run into a few problems while trying to configure this program on our lab machine, so we looked for a different program that would work for the purpose of this demonstration!

[Screenshot: './configure' completing successfully]

This ./configure finished without any errors, so the next step is to type make. [2nd magic command]

[Screenshot: compiling the program with 'make']

This simple command will magically convert the source code into a usable program… the best analogy for this process is a recipe: the source code contains all the ingredients, and if you understand programming, you can change the ingredients to make the dish better. Typing the make command takes the ingredients and cooks the whole meal for you! This process is known as 'compiling' the program.

If make finishes successfully, you will want to put all the files into the right directories, for example, all the help files in the help files directory, all the configuration files in the /etc directory (covered in the pages that follow).

To perform this step, you have to log in as the superuser or 'root' account; if you don't know this password, you can't do this.

Assuming you are logged in as root, type make install. [3rd magic command]

[Screenshot: installing the program with 'make install']

Lastly, once our program has been configured, compiled and installed in /usr/local/bin with the name of 'bwn-ng', we are left with a whole bunch of extra files that are no longer useful; these can be cleaned up using the 'make clean' command – but this, as you might have guessed, is not considered a magic command :)

[Screenshot: cleaning up the build files with 'make clean']
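Putting the whole sequence together, a typical source install looks something like this (the package name below is purely illustrative):

tar -zxvf some-program-1.0.tar.gz   # unpack the downloaded source
cd some-program-1.0
./configure                         # 1st magic command: adapt the source to your system
make                                # 2nd magic command: compile the program
make install                        # 3rd magic command: copy the files into place (run as root)
make clean                          # optional: remove the leftover build files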

There, that’s it!

Now here’s the good news… that was the old hard way!

All the people involved with Linux realised that most people don't need to read the source code and change the program and don't want to compile programs, so there is now a new way of distributing programs in what is known as 'rpm' (Red Hat Package Manager) format.

This is one single file containing a pre-compiled program; you just have to double-click the rpm file (in the Linux graphical interface – X) and it will be installed on your system for you!

In the event that you find a program that is not compiling with ‘make‘ you can search on the net (we recommend www.pbone.net ) for an rpm based on your Linux distribution and version. Installation then is simply one click away for the graphical X desktop, or one command away for the hardcore Linux enthusiasts!

Because the ‘rpm’ utility is quite complex with a lot of flags and options, we would highly recommend you read its ‘man’ page before attempting to use it to install a program.

One last note about rpm is that it will also check to see if there are any dependent programs or files that should or shouldn’t be touched during an install or uninstall. By doing so, it is effectively protecting your operating system from accidentally overwriting or deleting a critical system file, causing a lot of problems later on!


ADVANCED LINUX COMMANDS


Now that you’re done learning some of the Basic Linux commands and how to use them to install Linux Software, it’s time we showed you some of the other ways to work with Linux. Bear in mind that each distribution of Linux (Redhat, SUSE, Mandrake etc) will come with a slightly different GUI (Graphical User Interface) and some of them have done a really good job of creating GUI configuration tools so that you never need to type commands at the command line.

•VI EDITOR

For example, if you want to edit a text file you can easily use one of the powerful GUI tools like Kate, Kwrite etc., which are all like notepad in Windows though much more powerful; they have features such as multiple file editing and syntax highlighting (if you open an HTML file it understands the HTML tags and highlights them for you). However, you can also use the very powerful vi editor.

When first confronted by vi, most users are totally lost: you open a file in vi (e.g. vi document1) and try to type, but nothing seems to happen... the system just keeps beeping!

linux-introduction-avd-cmd-line-1

Well, that’s because vi functions in two modes. One is the command mode, where you give vi commands such as open a file, exit, split the view, search and replace etc., and the other is the insert mode, where you actually type text!

Don’t be put off by the fact that vi doesn’t have a pretty GUI interface to go with it; this is an incredibly powerful text editor that is well worth your time learning... once you’re done with it you’ll never want to use anything else!

Realising that most people find vi hard to use straight off, there is a useful little walk-through tutorial that you can access by typing vimtutor at a command line. The tutorial opens vi with the tutorial loaded in it, and you try out each of the commands and shortcuts in vi itself. It’s very easy and makes navigating around vi a snap. Check it out.

linux-introduction-avd-cmd-line-2
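Once you’ve been through vimtutor, here is a small cheat-sheet of the vi commands you will use most often (press Esc first to make sure you are in command mode):

i      – switch to insert mode at the cursor
Esc    – return to command mode
:w     – write (save) the file
:q     – quit
:wq    – save and quit
:q!    – quit without saving changes
/text  – search forward for ‘text’
dd     – delete the current line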

•GREP

Another very useful Linux command is the grep command. This little baby searches for a string in any file. The grep command is frequently used in combination with other commands in order to search for a specific string. For example, if we wanted to check our web server’s log file for a specific URL query or IP address, the ‘grep’ command would do this job just fine.

If, on the other hand, you want to find every occurrence of ‘hello world’ in every .txt file you have, you would type grep "hello world" *.txt

You’ll see some very common command structures later on that utilise ‘grep’. At the same time, you can go ahead and check grep’s man page by typing man grep , it has a whole lot of very powerful options.

linux-introduction-avd-cmd-line-3
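For instance, assuming an Apache web server that logs to /var/log/httpd/access_log (the exact path varies between distributions), you could pull out every request made by a particular IP address with:

# grep "192.168.1.50" /var/log/httpd/access_log

Adding the -i flag makes the search case-insensitive, while -r searches recursively through a whole directory tree.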

•PS – PROCESS ID (PID) DISPLAY

The ps command will show all the tasks you are currently running on the system; it’s the equivalent of the Windows Task Manager, and you’ll be happy to know that there are also GUI versions of ‘ps’.

If you’re logged in as root on your Linux system and type ps aux, you’ll see all processes running on the system for every user. For security purposes, however, ordinary users will only see processes owned by them when typing the same command.

linux-introduction-avd-cmd-line-4

Again, man ps will provide you with the full list of options available for the command.

•KILL

The ‘kill’ command is complementary to the ‘ps’ command as it will allow you to terminate a process revealed with the previous command. In cases where a process is not responding, you would use the following syntax to effectively kill it: kill -9 pid where ‘pid’ is the Process ID (PID) that ‘ps’ displays for each task.

linux-introduction-avd-cmd-line-5

In the above example, we ran a utility called ‘bandwidth’ twice, which shows up as two different process IDs (7171 & 13344) in the output of the ps command. We then killed one of them using the command kill -9 7171. The next time we ran ‘ps’, the system reported that the process started with the ‘./bandwidth’ command had been killed.

Another useful flag we can use with the ‘kill’ command is -HUP. This neat flag won’t kill the process; instead, most daemons treat it as an instruction to re-read their configuration. So, if you’ve got a service running and need it to pick up changes made in its configuration file, then the -HUP flag will do just fine. Many people look at it as an alternative ‘reload’ command.

The complete syntax to make use of the flag is: kill -HUP pid where ‘pid’ is the process ID number you can obtain using the ‘ps’ command, just as we saw in the previous examples.
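As an illustration (the daemon name and PID here are hypothetical), you would first locate the process with ‘ps’ and then send it the -HUP signal:

# ps aux | grep squid
# kill -HUP 2471        (use whatever PID ‘ps’ reported for the daemon)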

 

CHAINING COMMANDS, REDIRECTING OUTPUT, PIPING

In Linux, you can chain groups of commands together with incredible ease; this is where the true power of the Linux command line lies. You use small tools, each of which does one little task, and pass the output on to the next one.

For example, when you run the ps aux command, you might see a whole lot of output that you cannot read in one screen, so you can use the pipe symbol ( | ) to send the output of ‘ps’ to ‘grep’ which will search for a string in that output. This is known as ‘piping’ as it’s similar to plumbing where you use a pipe to connect two things together.

linux-introduction-avd-cmd-line-6

Say you want to find the task ‘antispam’: you can run ps aux | grep antispam . ps ‘pipes’ its output to grep, which then searches for the string, showing you only the lines containing that text.

If you wanted ps to display one page at a time you can pipe the output of ps to either more or less . The advantage of less is that it allows you to scroll upwards as well. Try this: ps aux | less . Now you can use the cursors to scroll through the output, or use pageup, pagedown.

•ALIAS

The ‘alias’ command is very neat: it lets you create a shortcut keyword for another, longer command. Say you don’t always want to type ps aux | less; you can create an alias for it... we’ll call our alias ‘pl’. So you type alias pl=’ps aux | less’ .

Now whenever you type pl , it will actually run ps aux | less . Neat, isn’t it?

linux-introduction-avd-cmd-line-7

You can view the aliases that are currently set by typing alias:

linux-introduction-avd-cmd-line-8

As you can see, there are quite a few aliases already set for the ‘root’ account we are using. You’ll be surprised to know that most Linux distributions automatically create a number of aliases by default; these are there to make your life as easy as possible and can be deleted any time you wish.
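One thing worth knowing is that an alias created at the command line lasts only for the current shell session. Assuming you are using the bash shell (the default on most distributions), a common convention is to add the alias line to your ~/.bashrc file so it is recreated every time you log in; the ‘unalias’ command removes an alias you no longer want:

$ echo "alias pl='ps aux | less'" >> ~/.bashrc     (make the alias permanent for future logins)
$ unalias pl                                       (remove the alias from the current session)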

OUTPUT REDIRECTION

It’s not uncommon to want to redirect the output of a command to a text file for further processing. In the good old DOS operating system, this was achieved by using the ‘>‘ operator. Even today, with the latest Windows operating systems, you would open a DOS command prompt and use the same method!

The good news is that Linux also supports these functions without much difference in the command line.

For example, if we wanted to store the listing of a directory into a file, we would type the following: ls > dirlist.txt:

linux-introduction-avd-cmd-line-9

As you can see, we’ve taken the output of ‘ls’ and redirected it to our file. Let’s now take a look and see what has actually been stored in there by using the command cat dirlist.txt :

linux-introduction-avd-cmd-line-10

As expected, the dirlist.txt file contains the output of our previous command. So you might ask yourself ‘what if I need to append the results?’ – No problem here, as we’ve already got you covered.

When there’s a need to append to a file, just as in DOS we simply use the double >> operator. This appends the new output to the file specified on the command line:

linux-introduction-avd-cmd-line-11

The above example clearly shows the content of our file named ‘document2’ which is then appended to the previously created file ‘dirlist.txt’. With the use of the ‘cat’ command, we are able to examine its contents and make sure the new data has been appended.

Note:

By default, the single > will overwrite the file if it exists, so if you give the ls > dirlist.txt command again, it will overwrite the first dirlist.txt. However, if you specify >> it will add the new output below the previous output in the file. This is known as output redirection.

In Windows and DOS you can only run one command at a time; in Linux, however, you can string many commands together in a single statement. For example, let’s say we want to see the directory list, then delete all files ending in .txt, then see the directory list again.

This is possible in Linux using one statement as follows : ls -l; rm -f *.txt; ls -l . Basically you separate each command using a semicolon, ‘;‘. Linux then runs all three commands one after the other. This is also known as command chaining.
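Putting a few of these ideas together, here is a small, purely illustrative one-liner that filters the process list for a string, saves the result to a file and then displays it, all in one statement:

$ ps aux | grep bash > bash_procs.txt; cat bash_procs.txt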

BACKGROUND PROCESSES

If you affix an ampersand ‘&’ to the end of any command, it will run in the background and not disturb you. There is no straightforward equivalent for this in Windows, and it is very useful because it lets you start a command in the background and carry on with other tasks while waiting for it to complete.

The only thing you have to keep in mind is that you will not see the output from the command on your screen since it is in the background, but we can redirect the output to a file the way we did two paragraphs above.

For example, if you want to search through all the files in a directory for the word ‘Bombadil’, but you want this task to run in the background and not interrupt you, you can type this: grep “Bombadil” *.* >> results.txt& . Notice that we’ve added the ampersand ‘&’ character to the end of the command, so it will now run in the background and place the results in the file results.txt . When you press enter, you’ll see something like this :

$ grep “Bombadil” *.* >> results.txt&

[1] 1272

linux-introduction-avd-cmd-line-12

Our screen shot confirms this. We created a few new files that contained the string ‘Bombadil’ and then gave the command grep “Bombadil” *.* >> results.txt& . The system accepted our command and placed the process in the background using PID (Process ID) 14976. When we next gave the ‘ls’ command to see the listing of our directory we saw our new file ‘results.txt’ which, as expected, contained the files and lines where our string was found.

If you run a ‘ps‘ while the background command is still executing (a complex command can take some time to complete), you’ll see it in the list. Remember that you can use all the modifiers in this section with any combination of Linux commands; that’s what makes it so powerful. You can take lots of simple commands and chain, pipe and redirect them in such a way that they do something complicated!


LINUX FILE & FOLDER PERMISSIONS


File & folder security is a big part of any operating system and Linux is no exception!

These permissions allow you to choose exactly who can access your files and folders, providing an overall enhanced security system. This is one of the major weaknesses in the older Windows operating systems where, by default, all users can see each other’s files (Windows 95, 98, Me).

For the more recent versions of the Windows operating system, such as NT, 2000, XP and 2003, things look a lot safer as they fully support file & folder permissions, just as Linux has since the beginning.

Together, we’ll now examine a directory listing from our Linux lab server, to help us understand the information provided. While a simple ‘ls’ will give you the file and directory listing within a given directory, adding the flag ‘-l’ will reveal a number of new fields that we are about to take a look at:

linux-introduction-file-permissions-1

It’s possible that most Linux users have seen similar information regarding their files and folders and therefore should feel pretty comfortable with it. If, on the other hand, you happen to fall into the group of people who haven’t seen such information before, then you either work too much in the GUI interface of Linux, or simply haven’t had much experience with the operating system :)

Whatever the case, don’t disappear – it’s easier than you think!!

SO WHAT DOES ALL THIS OUTPUT MEAN ? ESPECIALLY ALL THOSE ‘RWX’ LINES?!

Let’s start from scratch, analysing the information in the previous screenshot.

linux-introduction-file-permissions-2

In the yellow column on the right we have the file & directory names (dirlist.txt, document1, document2 etc.) – nothing new here. Next, in the green column, we will find the time and date of creation.

Note that the date and time column will not always display in the format shown. If the file or directory it refers to was created in a year different from the current one, it will then show only the date, month and year, discarding the time of creation.

For example, if the file ‘dirlist.txt’ was created on the 27th of June, 2004, then the system would show:

Jun 27 2004 dirlist.txt

instead of

Jun 27 11:28 dirlist.txt

A small but important note when examining files and folders! The date shown also changes when the file is modified: if we edited a file created last year, then the next time we typed ‘ls -l’ the file’s date information would change to today’s date. This is one way to check whether files have been modified or tampered with.

The next column (purple) contains the file size in bytes – again nothing special here.

linux-introduction-file-permissions-3

The next column (orange) shows the file’s ownership. Every file in Linux is ‘owned’ by a particular user; normally this is the user (owner) who created the file, but you can always give ownership to someone else.

The owner might belong to a particular group, in which case this file is also associated with the user’s group. In our example, the left column labeled ‘User’ refers to the actual user that owns the file, while the right column labeled ‘group’ refers to the group the file belongs to.

Looking at the file named ‘dirlist.txt’, we can now understand that it belongs to the user named ‘root’ and group named ‘sys’.

Following the permissions is the column with the cyan border in the listing.

The system identifies files by their inode number, which is the unique file system identifier for the file. A directory is actually a listing of inode numbers with their corresponding filenames. Each filename in a directory is a link to a particular inode.

Links let you give a single file more than one name. Therefore, the numbers indicated in the cyan column specifies the number of links to the file.

As it turns out, a directory is actually just a file containing information about link-to-inode associations.

Next up is a very important column: the first one on the left, containing the ‘-rwx----w-’ characters. These are the actual permissions set for the particular file or directory we are examining.

To make things easier, we’ve split the permissions section into a further 4 columns as shown above. The first column indicates whether we are talking about a directory (d), file (-) or link (l).

In newer Linux distributions, the system will usually present directory names in colour, helping them stand out from the rest of the files. A regular file is indicated by a dash (-), while links use the letter ‘l’. For those unfamiliar with links, consider them something similar to Windows shortcuts.

linux-introduction-file-permissions-4

Column 2 refers to the user rights. This is the owner of the file, directory or link and these three characters determine what the owner can do with it.

The 3 characters in column 2 are the permissions for the owner (user rights) of the file or directory. The next 3 are the permissions for the group that the file is owned by, and the final 3 characters define the access permissions for the others group, that is, everyone else not part of the group.

So, there are 3 possible attributes that make up file access permissions:

r – Read permission. Whether the file may be read. In the case of a directory, this would mean the ability to list the contents of the directory.

w – Write permission. Whether the file may be written to or modified. For a directory, this defines whether you can make any changes to the contents of the directory. If write permission is not set then you will not be able to delete, rename or create a file.

x – Execute permission. Whether the file may be executed. In the case of a directory, this attribute decides whether you have permission to enter, run a search through that directory or execute some program from that directory.

Let’s take a look at another example:

linux-introduction-file-permissions-5

Take the permissions of ‘red-bulb’, which are drwxr-x---. The owner of this directory is user david and the group owner of the directory is sys. The first 3 permission attributes are rwx. These permissions allow full read, write and execute access to the directory for user david. So we conclude that david has full access here.

The group permissions are r-x. Notice there is no write permission given here so while members of the group sys can look at the directory and list its contents, they cannot create new files or sub-directories. They also cannot delete any files or make changes to the directory content in any way.

Lastly, no one else has any access, because the access attributes for others are ---.

If we assume the permissions are drw-r--r--, you see that the owner of the directory (david) can list and make changes to its contents (read and write access) but, because there is no execute (x) permission, the user is unable to enter it! You must have read and execute (r-x) in order to enter a directory and list its contents. Members of the group sys have a similar problem: they can read (list) the directory’s contents but can’t enter it because there is no execute (x) permission given!

Lastly, everyone else can also read (list) the directory but is unable to enter it because of the absence of the execute (x) permission.

Here are some more examples focusing on the permissions:

-r--r--r-- : This means that the owner, group and everyone else have only read permissions to the file (remember, if there’s no ‘d‘ or ‘l‘, then we are talking about a file).

-rw-rw-rw- : This means that the owner, group and everyone else has read and write permissions.

-rwxrwxrwx : Here, the owner, group and everyone else have full permissions, so they can all read, write and execute the file.

MODIFYING OWNERSHIP & PERMISSIONS

So how do you change permissions or change the owner of a file?

Changing the owner or group owner of a file is very simple, you just type ‘chown user:group filename.ext‘, where ‘user’ and ‘group’ are those to whom you want to give ownership of the file. The ‘group’ parameter is optional, so if you type ‘chown david file.txt‘, you will give ownership of file.txt to the user named david.

In the case of a directory, nothing much changes as the same command is used. However, because directories usually contain files that also need to be assigned to the new user or group, we use the ‘-R‘ flag, which stands for ‘recursive’ – in other words all subdirectories and their files: ‘chown -R user:group dirname‘.
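As an illustration (the user, group, file and directory names are just examples), handing a single file and then an entire directory tree over to user david and group sys would look like this:

# chown david:sys report.txt
# chown -R david:sys /home/david/projects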

To change permissions you use the ‘chmod’ command. The possible options here are ‘u‘ for the user, ‘g‘ for the group, ‘o‘ for others, and ‘a‘ for all three. If you don’t specify one of these letters it will apply to all by default. After this you specify the permissions to add or remove using ‘+‘ or ‘-‘. Let’s take a look at an example to make it easier to understand:

If we wanted to add read, write and execute for the user of a particular file, we would type ‘chmod u+rwx file.txt‘. If on the other hand you typed ‘chmod g-rw file.txt‘, you would take away read and write permissions on that file for the group.

While it’s not terribly difficult to modify the permissions of a file or directory, remembering all the flags can be hard. Thankfully there’s another way, which is less complicated and much faster. By replacing the permissions with numbers, we are able to calculate the required permissions and simply enter the correct sum of various numbers instead of the actual rights.

The way this works is simple. We are aware of three different permissions, Read (r), Write (w) and Execute (x). Each of these permissions is assigned a number as follows:

r (read) – 4

w (write) – 2

x (execute) – 1

Now, to correctly assign a permission, all you need to do is add up the level you want, so if you want someone to have read and write, you get 4+2=6, if you want someone to have just execute, it’s just 1.. zero means no permissions. You work out the number for each of the three sections (owner, group and everyone else).

If you want to give read write and execute to the owner and nothing to everyone else, you’d get the number 700. Starting from the left, the first digit (7) presents the permissions for the owner of the file, the second digit (0) is the permissions for the group, and the last (0) is the permissions for everyone else. You get the 7 by adding read, write and execute permissions according to the numbers assigned to each right as shown in the previous paragraphs: 4+2+1 = 7.

r, w, x Permissions = Calculated Number

--- = 0
--x = 1
-w- = 2
-wx = 3 (2+1)
r-- = 4
r-x = 5 (4+1)
rw- = 6 (4+2)
rwx = 7 (4+2+1)
If you want to give full access to the owner, only read and execute to the group, and only execute to everyone else, you’d work it out like this :

owner: rwx = 4 + 2 + 1 = 7

group: r-x = 4 + 0 + 1 = 5

everyone: –x = 0 + 0 + 1 = 1

So your number will be 751, 7 for owner, 5 for group, and 1 for everyone. The command will be ‘chmod 751 file.txt‘. It’s simple isn’t it ?

If you want to give full control to everyone using all possible combinations, you’d give them all ‘rwx’ which equals to the number ‘7’, so the final three digit number would be ‘777’:

linux-introduction-file-permissions-6

If on the other hand you decide not to give anyone any permission, you would use ‘000’ (now nobody can access the file, not even you!). However, you can always change the permissions to give yourself read access, by entering ‘chmod 400 file.txt’.

For more details on the ‘chmod’ command, please take a look at the man pages.

As we will see soon, the correct combination of user and group permissions will allow us to perform our work while keeping our data safe from the rest of the world.

For example in order for a user or group to enter a directory, they must have at least read (r) and execute (x) permissions on the directory, otherwise access to it is denied:

linux-introduction-file-permissions-7

As seen here, user ‘mailman‘ is trying to access the ‘red-bulb‘ directory which belongs to user ‘david‘ and group ‘sys‘. Mailman is not a member of the ‘sys‘ group and therefore can’t access it. At the same time the folder’s permissions allow neither the group nor everyone to access it.

Now, let’s alter the permissions so that ‘everyone‘ has at least read and execute permissions and is therefore able to enter the folder – let’s check it out:

linux-introduction-file-permissions-8

Here we see the ‘mailman‘ user successfully entering the ‘red-bulb‘ directory because everyone has read (r) and execute (x) access to it!

The world of Linux permissions is pretty user friendly as long as you see it from the right perspective :) Practice and reviewing the theory will certainly help you remember the most important information so you can perform your work without much trouble.

If you happen to forget something, you can always re-visit us – any time of the day!

Continuing on to our last page, we will provide you with a few links to some of the world’s greatest Linux resources, covering Windows to Linux migration, various troubleshooting techniques, forums and much more that will surely be of help.


FINDING MORE INFORMATION


Since this document merely scratches the surface when it comes to Linux, you will probably find you have lots of questions and possibly problems. Whether these are problems with the operating system, or not knowing the proper way to perform the task in Linux, there is always a place to find help.

On our forums you’ll find a lot of experienced people always willing to go that extra mile to help you out, so don’t hesitate to ask – you’ll be suprised at the responses!

Generally the Linux community is a very helpful one. You’ll be happy to know that there is more documentation, tutorials, HOW-TOs and FAQs (Frequently Asked Questions) for Linux than for all other operating systems in the world!

If you go to any search engine, forum or news group researching a problem, you’ll always find an answer.

To save you some searching, here are a few websites where you can find information covering most aspects of the operating system:

  • http://www.tldp.org/ – The Linux Documentation Project homepage has the largest collection of tutorials, HOW-TOs and FAQs for Linux.
  • http://www.Linux.org/docs/ – The documentation page from the official Linux.org website. Contains links to a lot of useful information.
  • http://fedora.redhat.com/docs/ – The Red Hat Fedora Linux manuals page. Almost all of this information will apply to any other version of Linux as well. All the guides here are full of very useful information. You can download all the guides to view offline.
  • http://www.justLinux.com/nhf/ – Contains a library of information for beginners on all topics from setting up hardware, installing software, to compiling the kernel
  • http://www.pbone.net – Pbone is a great search engine to find RPM packages for your Linux operating system.
  • http://www.freshmeat.net – Looking for an application in Linux? Try Freshmeat – if you don’t find it there, it’s most probably not out yet!
  • http://www.sourceforge.net – The world’s largest development and download repository of Open Source code (free) and applications. Sourceforge hosts thousands of open source projects, most of which are of course for the Linux operating system.

We hope you have enjoyed this brief introduction to the Linux operating system and hope you’ll be tempted to try Linux for yourself. You’ve surely got nothing to lose and everything to gain!

Remember, Linux is the No.1 operating system when it comes to web services and mission critical servers – it’s not a coincidence other major software vendors are doing everything they can to stop Linux from gaining more ground!

How To Update Linux Workstations And Operating Systems


Like any other software, an operating system needs to be updated. Updates are required not only because of the new hardware coming into the market, but also for improving the overall performance and taking care of security issues.

Updates are usually done in two distinct ways: the incremental update and the major update. In an incremental update, components of the operating system undergo minor modifications. Users are usually notified of such modifications over the net and can download and install them one by one using the update-managing software.

However, some major modifications require so many changes, involving several packages simultaneously, that it becomes rather complicated to accomplish them serially over the net. This type of modification is best done with a fresh installation, after acquiring the improved version of the operating system.

Package management is one of the most distinctive features distinguishing major Linux distributions. Major projects offer a graphical user interface where users can select a package and install it with a mouse click. These programs are front-ends to the low-level utilities that manage the tasks associated with installing packages on a Linux system. Although many desktop Linux users feel comfortable installing packages through these GUI tools, command-line package management offers two excellent features not available in any graphical package management utility: power and speed.

The Linux world is sharply divided into three major groups, each swearing by the type of package management it uses – the “RPM” group, the “DEB” group and the “Slackware” group. There are other fragment groups using different package management types, but they are minor in comparison. Among the three groups, RPM and DEB are by far the most popular, and several other groups have been derived from them. Some of the Linux distributions that handle these package management systems are:

RPM – RedHat Enterprise/Fedora/CentOS/OpenSUSE/Mandriva, etc.

DEB – Debian/Ubuntu/Mint/Knoppix, etc.


RPM – REDHAT PACKAGE MANAGER

Although RPM was originally used by RedHat, this package format is handled by different package management tools specific to each Linux distribution. While OpenSUSE uses the “zypper” package management utility, RedHat Enterprise Linux (RHEL), Fedora and CentOS use “yum”, and Mandriva and Mageia use “urpmi”.

Therefore, if you are an OpenSUSE user, you will use the following commands:

For updating your package list: zypper refresh

For upgrading your system: zypper update

For installing new software pkg: zypper install pkg (from package repository)

For installing new software pkg: zypper install pkg  (from package file)

For updating existing software pkg: zypper update -t package pkg

For removing unwanted software pkg: zypper remove pkg

For listing installed packages: zypper search -ls

For searching by file name: zypper wp file

For searching by pattern: zypper search -t pattern pattern

For searching by package name pkg: zypper search pkg

For listing repositories: zypper repos

For adding a repository: zypper addrepo pathname

For removing a repository: zypper removerepo name


If you are a Fedora or CentOS user, you will be using the following commands:

For updating your package list: yum check-update

For upgrading your system: yum update

For installing new software pkg: yum install pkg (from package repository)

For installing new software pkg: yum localinstall pkg (from package file)

For updating existing software pkg: yum update pkg

For removing unwanted software pkg: yum erase pkg

For listing installed packages: rpm -qa

For searching by file name: yum provides file

For searching by pattern: yum search pattern

For searching by package name pkg: yum list pkg

For listing repositories: yum repolist

For adding a repository: (add repo to /etc/yum.repos.d/)

For removing a repository: (remove repo from /etc/yum.repos.d/)
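As a worked example, installing the Apache web server (packaged as ‘httpd’ on these distributions) and then confirming it is present would look like this:

# yum install httpd
# rpm -q httpd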


You may be a Mandriva or Mageia user, in which case, the commands you will use will be:

For updating your package list: urpmi.update -a

For upgrading your system: urpmi --auto-select

For installing new software pkg: urpmi pkg (from package repository)

For installing new software pkg: urpmi pkg (from package file)

For updating existing software pkg: urpmi pkg

For removing unwanted software pkg: urpme pkg

For listing installed packages: rpm -qa

For searching by file name: urpmf file

For searching by pattern: urpmq --fuzzy pattern

For searching by package name pkg: urpmq pkg

For listing repositories: urpmq --list-media

For adding a repository: urpmi.addmedia name path

For removing a repository: urpmi.removemedia media


DEB – DEBIAN PACKAGE MANAGER

Debian Package Manager was introduced by Debian and later adopted by all derivatives of Debian – Ubuntu, Mint, Knoppix, etc. This is a relatively simple and standardized set of tools, working across all the Debian derivatives. Therefore, if you use any of the distributions managed by the DEB package manager, you will be using the following commands:

For updating your package list: apt-get update

For upgrading your system: apt-get upgrade

For installing new software pkg: apt-get install pkg (from package repository)

For installing new software pkg: dpkg -i pkg (from package file)

For updating existing software pkg: apt-get install pkg

For removing unwanted software pkg: apt-get remove pkg

For listing installed package: dpkg -l

For searching by file name: apt-file search path

For searching by pattern: apt-cache search pattern

For searching by package name pkg: apt-cache search pkg

For listing repositories: cat /etc/apt/sources.list

For adding a repository: (edit /etc/apt/sources.list)

For removing a repository: (edit /etc/apt/sources.list)
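As a worked example, refreshing the package list and then installing the Apache web server (packaged as ‘apache2’ on Debian-based systems) would look like this:

# apt-get update
# apt-get install apache2
# dpkg -l apache2        (confirm the package is installed)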

Installing And Configuring Linux Webmin – Linux Web-Based Administration


For many engineers and administrators,  maintaining a Linux system can be a daunting task, especially if there’s limited time or experience.  Working in shell mode, editing files, restarting services, performing installations, configuring scheduled jobs (Cron Jobs) and much more, requires time, knowledge and patience.

One of the biggest challenges for people who are new to Linux is to work with the operating system in an easy and manageable way, without needing to know all the commands and file paths in order to get the job done.

All this has now changed, and you can now do all the above, plus a lot more, with a few simple clicks through an easy-to-follow web interface.  Sounds too good to be true?  Believe it or not, it is true!  It’s time to get introduced to ‘Webmin’.

Webmin is a freeware program that provides a web-based interface for system administration; it is a system configuration tool for administrators. One of Webmin’s strongest points is that it is modular, which means there are hundreds of extra modules/add-ons that can be installed to provide the ability to control additional programs or services someone might want to run on their Linux system.

Here are just a few of the features supported by Webmin, out of the box:

  • Setup and administer user accounts
  • Setup and administer groups
  • Setup and configure DNS services
  • Configure file sharing & related services (Samba)
  • Setup your Internet connection (including ADSL router, modem etc)
  • Configure your Apache webserver
  • Configure a FTP Server
  • Setup and configure an email server
  • Configure Cron Jobs
  • Mount, dismount and administer volumes, hdd’s and partitions
  • Setup system quotas for your users
  • Built-in file manager
  • Manage an OpenLDAP server
  • Setup and configure VPN clients
  • Setup and configure a DHCP Server
  • Configure a SSH Server
  • Setup and configure a Linux Proxy server (squid) with all supported options
  • Setup and configure a Linux Firewall
  • and much much more!!!

The great part is that webmin is supported on all Linux platforms and is extremely easy to install.  While our example is based on Webmin’s installation on a Fedora 16 server using the RPM package, these steps will also work on other versions such as Red Hat, CentOS and other Linux distributions.

Before we dive into Webmin, let’s take a quick look at what we’ve got covered:

  • Webmin Installation
  • Adding Users, Groups and Assigning Privileges
  • Listing and Working with File Systems on the System
  • Creating and Editing Disk Quotas for Unix Users
  • Editing the System Boot up, Adding and Removing Services
  • Managing and Examining System Log Files
  • Setting up and Changing System Timezone and Date
  • Managing DNS Server & Domain
  • Configuring DHCP Server and Options
  • Configuring FTP Server and Users/Groups
  • How to Schedule a Backup
  • Configuring CRON Jobs with Webmin
  • Configuring SSH Server with Webmin
  • Configuring Squid Proxy Server
  • Configuring Apache HTTP Server

INSTALLING WEBMIN ON LINUX FEDORA / REDHAT / CENTOS

Download the required RPM file from http://download.webmin.com/download/yum/ using the command (note the root status):

# wget http://download.webmin.com/download/yum/webmin-1.580-1.noarch.rpm

Install the RPM file of Webmin with the following command:

# rpm -Uvh webmin-1.580-1.noarch.rpm

Start Webmin service using the command:

# systemctl start webmin.service

You can now log in to https://Fedora-16:10000/ as root with your root password. To access your Webmin administration interface from any machine, simply use the following URL: https://your-linux-ip:10000 , where “your-linux-ip” is your Linux server’s or workstation’s IP address.
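If you also want Webmin to start automatically at every boot, and assuming the webmin.service used above is present on your system, you can enable it and verify its status with:

# systemctl enable webmin.service
# systemctl status webmin.service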

RUNNING WEBMIN

Open Firefox or any other browser, and type the URL https://Fedora-16:10000/ :

linux-webmin-1

You will be greeted with a welcome screen. Login as root with your root password. Once you are logged in, you should see the system information:

linux-webmin-2

ADDING USERS, GROUPS AND ASSIGNING THEM PRIVILEGES

Expand the “System” Tab in the left column index, and select the last entry “Users and Groups”.  You will be shown the list of the “Local Users” on the system:

linux-webmin-3

You can add users or delete them from this window. If you want to change the parameters of any user, you can do so. By clicking on any user, you can see the groups and privileges assigned to them. These can be changed as you like. For example, if you select the user “root“, you can see all the details of the user as shown below :

linux-webmin-4

By selecting the adjacent tab in the “Users and Groups” window, you can see the “Local Groups” as well:

linux-webmin-5

Here, you can see the members in each group by selecting that group. You can delete a group or add a new one. You can select who will be the member of the group, and who can be removed from a group. For example, you can see all the members in the group “mem“, if you select and open it:

linux-webmin-6

Here, you will be allowed to create a new group or delete selected groups. You can also add users to the groups or remove them as required. If needed, you can also change the group ID on files and modify the group in other modules as well.

LISTING AND WORKING WITH FILE SYSTEMS ON THE SYSTEM

By selecting “Disk and Network Filesystems” under the “System” tab on the left index, you can see the different file systems currently mounted.

linux-webmin-7

You can select another type of file system you would like to mount; select it from the drop-down menus as shown:

linux-webmin-8

By selecting a mounted file system, you can edit its details, such as whether it should be mounted at boot time, whether it should remain mounted or be unmounted now, and whether the file system should be checked at boot time. Mount options like read-only, executable and permissions can also be set here.

CREATING AND EDITING DISK QUOTAS FOR UNIX USERS

Prior to a Linux installation, a major and key consideration when partitioning is the /home directory.

Virtual hosts are widely set up under /home by almost all control panel mechanisms, since Users & Groups, the FTP server, user shells, Apache and several other directives are built on this /home partition. Therefore, /home should ideally be created as a Logical Volume on a Linux native file system (ext3). Here it is assumed there is already a /home partition on the system.

You can set the quotas by selecting “Disk & Network Filesystems” under “System”:

linux-webmin-9

This allows you to create and edit disk quotas for the users in your /home partition or directory. Each user is given a certain amount of disk space they can use; getting close to filling the quota will generally trigger a warning.

You can also edit other mounts such as the root directory “/” and also set a number of presented mount options:

linux-webmin-10

EDITING THE SYSTEM BOOT UP, ADDING AND REMOVING SERVICES

All Systemd services are neatly listed in the “Bootup and Shutdown” section within “System“:

linux-webmin-11

All service related functions such as start, stop, restart, start on boot, disable on boot, start now and on boot, and disable now and on boot are available at the bottom of the screen. This makes system bootup process modification a breeze, even for the less experienced:

linux-webmin-12

The “Reboot System” and “Shutdown System” function buttons are also located at the bottom, allowing you to immediately reboot or shut down the system.

MANAGING AND EXAMINING SYSTEM LOG FILES

Who would have thought managing system log files in Linux would be so easy? Webmin provides a dedicated section allowing the administrator to make a number of changes to the preferences of each system log file. The friendly interface will show you all available system log files and their locations. By clicking on the one of interest, you can see its properties and make the changes you require.

The following screenshot shows the “System Logs” listed in the index under “System” menu option:

linux-webmin-13

All the logs are available for viewing and editing. The screenshot below shows an example of editing the maillog. Through the interface, you can enable or disable logs and make a number of other changes on the fly:

linux-webmin-14

Another entry under “System” is the important function of “Log File Rotation“. This allows you to edit which log file you would like to rotate and how (daily, weekly or monthly). You can define what command will be executed after the log rotation is done. You can also delete the selected log rotations:

linux-webmin-15

Log rotation is very important, especially on a busy system as it will ensure the log files are kept to a reasonable and manageable size.

SETTING UP AND CHANGING SYSTEM TIMEZONE AND DATE

Webmin also supports setting up system time and date. To do so, you will have to go to “System Time” under “Hardware” in the main menu index.

linux-webmin-16

System time and hardware time can be separately set and saved. These can be made to match if required.

On the next tab you will be able to change the Timezone:

linux-webmin-17

The next tab is the ‘Time Server Sync‘, used for synchronizing to a time-server. This will ensure your system is always in sync with the selected time-server:

linux-webmin-18

Here, you will be able to select a specific timeserver with a hostname or address and set the schedule when the periodic synchronizing will be done.

MANAGING DNS SERVER & DOMAIN

DNS Server configuration is possible from the “Hostname and DNS Client“, which is located under “Networking Configuration” within “Networking” in the index:

linux-webmin-19

Here you can set the Hostname of the machine, the IP Address of the DNS Servers and their search domains and save them.

CONFIGURING DHCP SERVER AND OPTIONS

For configuration of your system’s DHCP server, go to “DHCP Server” within “System and Server Status” under “Others”:

linux-webmin-20

All parameters related to DHCP server can be set here:

linux-webmin-21

CONFIGURING FTP SERVER AND USERS/GROUPS

For ProFTPD Server, select “ ProFTPD Server” under “Servers”. You will see the main menu for ProFTPD server:

linux-webmin-22

You can see and edit the Denied FTP Users if you select the “Denied FTP Users“:

linux-webmin-23

Configuration file at /etc/proftpd.conf can be directly edited if you select the “Edit Config Files” in the main menu:

linux-webmin-24

HOW TO SCHEDULE A BACKUP

Whichever configuration files you would like to back up, schedule or restore, this can all be done from “Backup Configuration Files” under “Webmin”.

In the “Backup Now” window, you can set the modules, the backup destination, and what you want included in the backup. The backup can be a local file on the system, a file on an FTP server, or a file on an SSH server; for either type of server, you will have to provide a username and password. Anything else you would like included in the backup, such as Webmin module configuration files, server configuration files or other listed files, can also be specified here:

linux-webmin-25

If you want to schedule your backups, go to the next tab, “Scheduled Backups”, and select “Add a new scheduled backup” since, as shown, no scheduled backup has been defined yet:

linux-webmin-26

linux-webmin-27

Then set the exact backup schedule options. The information is nearly the same as for Backup Now; however, you now also have the choice of setting the schedule options, such as months, weekdays, days, hours, minutes and seconds.

linux-webmin-28

Restoration of modules can be selected from the “Restore Now” tab:

linux-webmin-29

The options for restore now follow the same pattern as for the backup. You have the options for restoring from a local file, an FTP server, an SSH server, and an uploaded file. Apart from providing the username and passwords for the servers, you have the option of only viewing what is going to be restored, without applying the changes.

CONFIGURING CRON JOBS WITH WEBMIN

Selecting the “Scheduled Cron Jobs” under “System” will allow creation, deletion, disabling and enabling of Cron jobs, as well as controlling user access to cron jobs. The interface also shows the users who are active and their current cron-jobs. The jobs can be selectively deleted, disabled or enabled (if disabled earlier).

linux-webmin-30

For creating a new cron job and scheduling it, select the tab “Create a new scheduled cron job”. You have the options of setting the Months, Weekdays, Days, Hours, Minutes. You have the option of running the job on any date, or running it only between two fixed dates:

linux-webmin-31

For controlling access to Cron jobs, select the next tab “Control User Access to Cron Jobs” in the main menu:

linux-webmin-32

CONFIGURING SSH SERVER WITH WEBMIN

Selecting “SSH Server” under “Servers” will allow all configuration of the SSH Server:

linux-webmin-33

Access Control is provided by selecting the option “Access Control” from the main menu :

linux-webmin-34

Miscellaneous options are available when the “Miscellaneous Options” is selected from the main menu:

linux-webmin-35

The SSH config files can be accessed directly and edited by selecting “Edit Config Files” from the main menu.

linux-webmin-36

CONFIGURING SQUID PROXY SERVER

Select “Squid Proxy Server” under “Servers”. The main menu shows what all can be controlled there:

linux-webmin-37

The Access Control allows ACL, Proxy restrictions, ICP restrictions, External ACL programs, and Reply proxy restrictions, when you select “Access Control”:

linux-webmin-38

linux-webmin-39

CONFIGURING APACHE HTTP SERVER

You can configure “Apache Webserver” under “Servers”. The main menu shows what you can configure there.

All Global configuration can be done from the first tab:

linux-webmin-40

You can also configure the existing virtual hosts or create a virtual host, if you select the other tabs:

linux-webmin-41

Users and Groups who are allowed to run Apache are mentioned here (select from the main menu):

linux-webmin-42

Apache configuration files can be directly edited from the main menu.

All the configuration files, httpd.conf, sarg.conf, squid.conf, and welcome.conf can be directly edited from this interface:

linux-webmin-43

Any other service or application which you cannot locate directly in the index on the left can be found by entering its name in the search box on the left. If the item searched for is not installed, Webmin will offer to download the RPM and install it. A corresponding entry will then appear in the index on the left and you can proceed to configure the service or application. After installing an application or service, the modules can be refreshed as well. From the Webmin interface, you can also view each module’s logs.

Working With Linux TCP / IP Network Configuration Files


This article covers the main TCP/IP network configuration files used by Linux to configure various network services of the system, such as IP address, default gateway, name servers (DNS), hostname and much more. Any Linux administrator must be well aware of where these services are configured and how to use them. The good news is that most of the information provided in this article applies to Redhat, Fedora, Enterprise Linux, CentOS, Ubuntu and other similar Linux distributions.

On most Linux systems, you can access the TCP/IP connection details within ‘X Windows‘ from Applications > Others > Network Connections. The same may also be reached through Application > System Settings > Network > Configure. This opens up a window, which offers configuration of IP parameters for wired, wireless, mobile broadband, VPN and DSL connections:

linux-tcpip-config-1

The values entered here modify the files:

           /etc/sysconfig/network-scripts/ifcfg-eth0

           /etc/sysconfig/networking/devices/ifcfg-eth0

           /etc/resolv.conf

           /etc/hosts

The static host IP assignment is saved in /etc/hosts

The DNS server assignments are saved in the /etc/resolv.conf

IP assignments for all the devices found on the system are saved in the ifcfg-<interface> files mentioned above.

If you want to see all the IP assignments, you can run the command for interface configuration:

# ifconfig

Following is the output of the above command:

[root@gateway ~]# ifconfig

eth0    Link encap:Ethernet  HWaddr 00:0C:29:AB:21:3E
inet addr:192.168.1.18  Bcast:192.168.1.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:feab:213e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:1550249 errors:0 dropped:0 overruns:0 frame:0
TX packets:1401847 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:167592321 (159.8 MiB)  TX bytes:140584392 (134.0 MiB)
Interrupt:19 Base address:0x2000

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:71833 errors:0 dropped:0 overruns:0 frame:0
TX packets:71833 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:12205495 (11.6 MiB)  TX bytes:12205495 (11.6 MiB)

The command ifconfig is used to configure a network interface. It can be used to set up the interface parameters that are used at boot time. If no arguments are given, the command ifconfig displays the status of the currently active interfaces. If you want to see the status of all interfaces, including those that are currently down, you can use the argument -a, such as –

# ifconfig -a
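As a quick illustration (the addresses are only examples), ifconfig can also assign an address to an interface on the fly; note that a change made this way is lost at the next reboot unless it is also written to the interface’s configuration file:

# ifconfig eth0 192.168.1.20 netmask 255.255.255.0 up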

Fedora, Redhat Enterprise Linux, CentOS and other similar distributions support user profiles as well, with different network settings for each user. The user profile and its parameters are set by the network configuration tools. The relevant system files are placed in:

/etc/sysconfig/networking/profiles/profilename/

After boot-up, to switch to a specific profile you have to access a graphical tool, which will allow you to select from among the available profiles. You will have to run:

$ system-config-network

Or for activating the profile from the command line –

$ system-config-network-cmd -p <profilename> --activate

THE BASIC COMMANDS FOR NETWORKING

The basic commands used in Linux are common to every distro:

ifconfig – Configures and displays the IP parameters of a network interface

route – Used to set static routes and view the routing table

hostname – Necessary for viewing and setting the hostname of the system

netstat – Flexible command for viewing information about network statistics, current connections, listening ports

arp – Shows and manages the arp table

mii-tool – Used to set the interface parameters at data link layer (half/full duplex, interface speed, autonegotiation, etc.)

Many distros now include the iproute2 tools, with enhanced routing and networking capabilities:

ip – Multi-purpose command for viewing and setting TCP/IP parameters and routes.

tc – Traffic control command, used  for classifying, prioritizing, sharing, and limiting both inbound and outbound traffic.
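For instance, the ‘ip’ command can replace most of what ifconfig and route do; a few everyday invocations are:

# ip addr show           (list all interfaces and their addresses)
# ip route show          (display the routing table)
# ip link set eth0 up    (bring an interface up)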

TYPES OF NETWORK INTERFACE

LO (local loopback interface) – the loopback interface is recognized only internally to the computer; its IP address is usually 127.0.0.1 (any address in the 127.0.0.0/8 range refers to the local machine).

Ethernet cards are used to connect to the world external to the computer, usually named eth0, eth1, eth2 and so on.

Network interface files holding the configuration of LO and ethernet are:

/etc/sysconfig/network-scripts/ifcfg-lo

/etc/sysconfig/network-scripts/ifcfg-eth0

To see the contents of the files use the command:

# less /etc/sysconfig/network-scripts/ifcfg-lo

Which results in:

DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you’re having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback

And the following:

# less /etc/sysconfig/network-scripts/ifcfg-eth0

Which gives the following results:

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=00:0C:29:52:A3:DB
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.1.18
PREFIX=24
GATEWAY=192.168.1.11
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03

START AND STOP THE NETWORK INTERFACE CARD

The ifconfig command can be used to start and stop network interface cards:

# ifconfig eth0 up
# ifconfig eth0 down

The ifup & ifdown command can also be used to start and stop network interface cards:

# ifup eth0
# ifdown eth0

The systemctl commands can also be used to enable, start, stop, restart and check the status of the network interface services –

# systemctl enable network.service
# systemctl start network.service
# systemctl stop network.service
# systemctl restart network.service
# systemctl status network.service

DISPLAYING AND CHANGING YOUR SYSTEM’S HOSTNAME

The command hostname displays the current hostname of the computer, which is ‘Gateway’:

# hostname
Gateway

You can change the hostname by giving the new name at the end of the command –

# hostname Firewall-cx

The new hostname will appear in your shell prompt the next time you log in after logging out. Similarly, changes made to the interface configuration files only take effect after the network service has been restarted.
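Note that a hostname set with the ‘hostname’ command does not survive a reboot. To make it permanent on Red Hat-family releases of this era, it also needs to be recorded in /etc/sysconfig/network (newer releases use /etc/hostname instead):

# vi /etc/sysconfig/network        (set or edit the line: HOSTNAME=Firewall-cx)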

This concludes our Linux Network Configuration article.

Understanding The Linux Init Process And Different Runlevels


A Linux system can be used in many different ways, and this is the main idea behind operating different services at different run levels. For example, the Graphical User Interface can only be run if the system is running the X server; multi-user operation is only possible if the system is in a multi-user state or mode, for example with networking available. These are the higher states of the system, and sometimes you may want to operate at a lower level, say, in single-user mode or at the command line.

Such levels are important for different operations, such as fixing file or disk corruption problems, or for a server that should operate in a run level where an X session is not required. In such cases, having services running that depend on higher levels of operation makes no sense, since they would hamper the operation of the entire system.

Each service is assigned to start whenever its run level is reached. This keeps the startup process orderly: when you change the mode of the machine, you do not need to work out which services to start or stop manually.

The main run-levels that a system could use are:

RunLevel – Target(s) – Notes

0 – runlevel0.target, poweroff.target – Halt the system
1 – runlevel1.target, rescue.target – Single-user mode
2, 4 – runlevel2.target, runlevel4.target, multi-user.target – User-defined/site-specific runlevels; by default, identical to 3
3 – runlevel3.target, multi-user.target – Multi-user, non-graphical; users can usually log in via multiple consoles or via the network
5 – runlevel5.target, graphical.target – Multi-user, graphical; usually has all the services of runlevel 3 plus a graphical login (X11)
6 – runlevel6.target, reboot.target – Reboot
Emergency – emergency.target – Emergency shell

The system and service manager for Linux is now “systemd”. It provides the concept of “targets”, as shown in the table above. Although targets serve a similar purpose to runlevels, they behave somewhat differently. Each target has a name instead of a number and serves a specific purpose. Some targets are implemented by inheriting all the services of another target and adding more services to it.

Backward compatibility exists, so switching targets using the familiar telinit RUNLEVEL command still works. On Fedora installs, runlevels 0, 1, 3, 5 and 6 have an exact mapping to specific systemd targets. However, user-defined runlevels such as 2 and 4 are not mapped that way; by default, they are treated the same as runlevel 3.

To use the user-defined levels 2 and 4, new systemd targets have to be defined that take one of the existing runlevel targets as a base; the services you want to enable are then symlinked into the new target’s .wants directory.

The most commonly used runlevels in a currently running linux box are 3 and 5. You can change runlevels in many ways.

A runlevel of 5 will take you to a GUI-enabled login prompt and desktop operation. With a default installation, this would normally take you into a GNOME or KDE Linux environment. A runlevel of 3 would boot your Linux box into terminal mode (non-X) and drop you to a terminal login prompt. Runlevels 0 and 6 are the runlevels for halting or rebooting your Linux system respectively.

Although compatible with SysV and LSB init scripts, systemd:

  • Provides aggressive parallelization capabilities.
  • Offers on-demand starting of daemons.
  • Uses socket and D-Bus activation for starting services.
  • Keeps track of processes using Linux cgroups.
  • Maintains mount and automount points.
  • Supports snapshotting and restoring of the system state.
  • Implements an elaborate transactional dependency-based service control logic.

Systemd starts up and supervises the entire operation of the system. It is based on the notion of units, each composed of a name and a type, with a matching configuration file of the same name. For example, a unit avahi.service will have a configuration file with an identical name, and will be a unit that encapsulates the Avahi daemon. Among the different types of units are service, socket, device, mount, automount, target, and snapshot.

To introspect and control the state of the system and service manager under systemd, the main tool or command is “systemctl”. When booting up, systemd activates the default.target. The job of the default.target is to activate the different services and other units by considering their dependencies. The ‘systemd.unit=’ option can be passed on the kernel command line to override the unit to be activated. For example,

systemd.unit=rescue.target is a special target unit for setting up the base system and a rescue shell (similar to run level 1);

systemd.unit=emergency.target, is very similar to passing init=/bin/sh but with the option to boot the full system from there;

systemd.unit=multi-user.target for setting up a non-graphical multi-user system;

systemd.unit=graphical.target for setting up a graphical login screen.
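For example, to boot straight into rescue mode for a single session, you can temporarily edit the kernel line from the boot loader menu (in GRUB, press 'e', append the parameter and boot). The kernel image and root device below are placeholders for illustration only:

kernel /vmlinuz-<version> ro root=/dev/sdaX quiet systemd.unit=rescue.target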

HOW TO ENABLE/DISABLE LINUX SERVICES

Following are the commands used to enable or disable services in CentOS, Redhat Enterprise Linux and Fedora systems:

Activate a service immediately e.g postfix:

[root@gateway ~]# service postfix start
Starting postfix: [  OK  ]

To deactivate a service immediately e.g postfix:

[root@gateway ~]# service postfix stop
Shutting down postfix: [  OK  ]

To restart a service immediately e.g postfix:

[root@gateway ~]# service postfix restart
Shutting down postfix: [FAILED]
Starting postfix: [  OK  ]

You might have noticed the ‘FAILED’ message. This is normal behavior as we shut down the postfix service with our first command (service postfix stop), so shutting it down a second time would naturally fail!
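Note that starting a service this way does not make it persistent across reboots. On SysV-style systems such as the above, you can additionally enable it at boot with chkconfig and verify the result; postfix is again used purely as an example, and the exact runlevels shown may differ on your system:

[root@gateway ~]# chkconfig postfix on
[root@gateway ~]# chkconfig --list postfix
postfix         0:off   1:off   2:on    3:on    4:on    5:on    6:off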

DETERMINE WHICH LINUX SERVICES ARE ENABLED AT BOOT

The first column of the output listed below is the name of a service which is currently enabled at boot. Review each listed service to determine whether it can be disabled.

If it is appropriate to disable a service, do so using the command:

[root@gateway ~]# chkconfig servicename off

Run the following command to obtain a list of all services configured to start in the different runlevels of your system:

[root@gateway ~]# chkconfig --list | grep :on

NetworkManager  0:off   1:off   2:on    3:on    4:on    5:on    6:off
abrtd           0:off   1:off   2:off   3:on    4:off   5:on    6:off
acpid           0:off   1:off   2:on    3:on    4:on    5:on    6:off
atd             0:off   1:off   2:off   3:on    4:on    5:on    6:off
auditd          0:off   1:off   2:on    3:on    4:on    5:on    6:off
autofs          0:off   1:off   2:off   3:on    4:on    5:on    6:off
avahi-daemon    0:off   1:off   2:off   3:on    4:on    5:on    6:off
cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off
crond           0:off   1:off   2:on    3:on    4:on    5:on    6:off
cups            0:off   1:off   2:on    3:on    4:on    5:on    6:off
haldaemon       0:off   1:off   2:off   3:on    4:on    5:on    6:off
httpd           0:off   1:off   2:off   3:on    4:off   5:off   6:off
ip6tables       0:off   1:off   2:on    3:on    4:on    5:on    6:off
iptables        0:off   1:off   2:on    3:on    4:on    5:on    6:off
irqbalance      0:off   1:off   2:off   3:on    4:on    5:on    6:off

Several of these services are required, but several others might not serve any purpose in your environment, and use CPU and memory resources that would be better allocated to applications. Assuming you don’t need RPC services, autofs or NFS, they can be disabled for all runlevels using the following commands:

[root@gateway ~]# /sbin/chkconfig --level 0123456 portmap off
[root@gateway ~]# /sbin/chkconfig --level 0123456 nfslock off
[root@gateway ~]# /sbin/chkconfig --level 0123456 netfs off
[root@gateway ~]# /sbin/chkconfig --level 0123456 rpcgssd off
[root@gateway ~]# /sbin/chkconfig --level 0123456 rpcidmapd off
[root@gateway ~]# /sbin/chkconfig --level 0123456 autofs off

HOW TO CHANGE RUNLEVELS

You can switch to runlevel 3 by running:

[root@gateway ~]# systemctl isolate multi-user.target

(or)

[root@gateway ~]# systemctl isolate runlevel3.target

You can switch to runlevel 5 by running:

[root@gateway ~]# systemctl isolate graphical.target

(or)

[root@gateway ~]# systemctl isolate runlevel5.target

HOW TO CHANGE THE DEFAULT RUNLEVEL USING SYSTEMD

Systemd uses a symlink to point to the default runlevel. You have to delete the existing symlink before you can create a new one:

[root@gateway ~]# rm /etc/systemd/system/default.target

Switch to runlevel 3 by default:

[root@gateway ~]# ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target  

Switch to runlevel 5 by default:

[root@gateway ~]# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

And just in case you were wondering, systemd does not use the classic /etc/inittab file.
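As a side note, recent systemd versions manage this symlink for you through a dedicated command; assuming your distribution ships it, the following is equivalent to creating the symlink manually:

[root@gateway ~]# systemctl set-default graphical.target
[root@gateway ~]# systemctl get-default
graphical.target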

HOW TO CHANGE THE DEFAULT RUNLEVEL USING THE INITTAB FILE

Besides the systemd way there is, of course, the classic inittab way. On SysV-init systems, runlevels are configured in the /etc/inittab text file, which also specifies the default runlevel.

To change the default runlevel in Fedora, edit /etc/inittab and find the line that looks like this:

id:5:initdefault:

The number 5 represents a runlevel with X enabled (usually GNOME or KDE). If you want to change to runlevel 3, simply change this:

id:5:initdefault:

to this:

id:3:initdefault:

Save the file and reboot your Linux box. It will now boot into runlevel 3, a runlevel without X or a GUI. Avoid setting the default runlevel in /etc/inittab to 0 or 6.

How To Secure Your Linux Server Or Workstation – Best Security Practices


BOOT DISK

One of the foremost requisites of a secure Linux server is the boot disk. Nowadays, this has become rather simple as most Linux distributions come on bootable CD/DVD/USB sticks. Other options are to use rescue disks such as ‘TestDisk’, ‘SystemRescueCD’, ‘Trinity Rescue Kit’ or ‘Ubuntu Rescue Remix’. These will enable you to regain access to your system if you are locked out, and also to recover files and partitions if your system is damaged. They can also be used to check for virus attacks and to detect rootkits.

The next requirement is patching your system. Distributions issue notices for security updates, and you can download and patch your system using these updates. RPM users can use the ‘up2date’ command, which automatically resolves dependencies, rather than the plain rpm commands, since those only report dependencies and do not help to resolve them.

PATCH YOUR SYSTEM

While RedHat/CentOS/Fedora users can patch their systems with a single command, ‘yum update‘,   Debian users can patch their systems with the ‘sudo apt-get update’ command, which will update the sources list. This should be followed by the command ‘sudo apt-get upgrade’, which will install the newest version of all packages on the machine, resolving all the dependencies automatically.

New vulnerabilities are being discovered all the time, and patches follow. One way to learn about new vulnerabilities is to subscribe to the mailing list of the distribution used.

DISABLE UNNECESSARY SERVICES

Your system becomes increasingly insecure as you operate more services, since every service has its own security issues. For improving the overall system performance and for enhancing security, it is important to detect and eliminate unnecessary running services. To know which services are currently running on your system, you can use commands like:

[root@gateway~]# ps aux

Following is an example output of the above command:

[root@gateway~]# ps aux
USER       PID   %CPU    %MEM    VSZ     RSS TTY    STAT START   TIME COMMAND
root         1        0.0           0.1   2828    1400 ?       Ss   Feb08   0:02 /sbin/init
root         2        0.0           0.0      0           0 ?        S    Feb08   0:00 [kthreadd]
root         3        0.0           0.0      0           0 ?        S    Feb08   0:00 [migration/0]
root         4        0.0           0.0      0           0 ?        S    Feb08   0:00 [ksoftirqd/0]
root         5        0.0           0.0      0           0 ?        S    Feb08   0:00 [watchdog/0]
root         6        0.0           0.0      0           0 ?        S    Feb08   0:00 [events/0]
root         7        0.0           0.0      0           0 ?        S    Feb08   0:00 [cpuset]
root         8        0.0           0.0      0           0 ?        S    Feb08   0:00 [khelper]
root         9        0.0           0.0      0           0 ?        S    Feb08   0:00 [netns]
root        10       0.0           0.0      0           0 ?        S    Feb08   0:00 [async/mgr]
root        11       0.0           0.0      0           0 ?        S    Feb08   0:00 [pm]
root        12       0.0           0.0      0           0 ?        S    Feb08   0:00 [sync_supers]
apache   17250  0.0           0.9  37036 10224 ?        S    Feb08   0:00 /usr/sbin/httpd
apache   25686  0.0           0.9  37168 10244 ?        S    Feb08   0:00 /usr/sbin/httpd
apache   28290  0.0           0.9  37168 10296 ?        S    Feb08   0:00 /usr/sbin/httpd
postfix   30051  0.0            0.2  10240  2136 ?        S    23:35   0:00 pickup -l -t fifo -u
postfix   30060  0.0            0.2  10308  2280 ?        S    23:35   0:00 qmgr -l -t fifo -u
root      31645  0.1             0.3  11120  3112 ?        Ss   23:45   0:00 sshd: root@pts/1

The following command will list all start-up scripts for RunLevel 3 (Full multiuser mode):

[root@gateway~]# ls -l /etc/rc.d/rc3.d/S*  
OR
[root@gateway~]# ls -l /etc/rc3.d/S* 

Here is an example output of the above commands:

[root@gateway~]# ls -l /etc/rc.d/rc3.d/S*
lrwxrwxrwx. 1 root root 23 Jan 16 17:45 /etc/rc.d/rc3.d/S00microcode_ctl -> ../init.d/microcode_ctl
lrwxrwxrwx. 1 root root 17 Jan 16 17:44 /etc/rc.d/rc3.d/S01sysstat -> ../init.d/sysstat
lrwxrwxrwx. 1 root root 22 Jan 16 17:44 /etc/rc.d/rc3.d/S02lvm2-monitor -> ../init.d/lvm2-monitor
lrwxrwxrwx. 1 root root 19 Jan 16 17:39 /etc/rc.d/rc3.d/S08ip6tables -> ../init.d/ip6tables
lrwxrwxrwx. 1 root root 18 Jan 16 17:38 /etc/rc.d/rc3.d/S08iptables -> ../init.d/iptables
lrwxrwxrwx. 1 root root 17 Jan 16 17:42 /etc/rc.d/rc3.d/S10network -> ../init.d/network
lrwxrwxrwx. 1 root root 16 Jan 27 01:04 /etc/rc.d/rc3.d/S11auditd -> ../init.d/auditd
lrwxrwxrwx. 1 root root 21 Jan 16 17:39 /etc/rc.d/rc3.d/S11portreserve -> ../init.d/portreserve
lrwxrwxrwx. 1 root root 17 Jan 16 17:44 /etc/rc.d/rc3.d/S12rsyslog -> ../init.d/rsyslog
lrwxrwxrwx. 1 root root 18 Jan 16 17:45 /etc/rc.d/rc3.d/S13cpuspeed -> ../init.d/cpuspeed
lrwxrwxrwx. 1 root root 20 Jan 16 17:40 /etc/rc.d/rc3.d/S13irqbalance -> ../init.d/irqbalance
lrwxrwxrwx. 1 root root 17 Jan 16 17:38 /etc/rc.d/rc3.d/S13rpcbind -> ../init.d/rpcbind
lrwxrwxrwx. 1 root root 19 Jan 16 17:43 /etc/rc.d/rc3.d/S15mdmonitor -> ../init.d/mdmonitor
lrwxrwxrwx. 1 root root 20 Jan 16 17:38 /etc/rc.d/rc3.d/S22messagebus -> ../init.d/messagebus

To disable services, you can either stop a running service or change the configuration in a way that the service will not start on the next reboot. To stop a running service, RedHat/CentOS users can use the command –

 [root@gateway~]# service service-name stop

The example below shows the command used to stop our Apache web service (httpd):

[root@gateway~]# service httpd stop
Stopping httpd: [  OK  ]

In order to stop the service from starting up at boot time, you could use –

  [root@gateway~]# /sbin/chkconfig --level 2345 service-name off

Where ‘service-name‘ is replaced by the name of the service, e.g. httpd.

You can also remove a service from the startup script by using the following commands which will remove the httpd (Apache Web server) service:

  [root@gateway~]# /bin/mv /etc/rc.d/rc3.d/S85httpd /etc/rc.d/rc3.d/K85httpd

or

  [root@gateway~]# /bin/mv /etc/rc3.d/S85httpd /etc/rc3.d/K85httpd

During startup of the Linux operating system, the rc program looks in the /etc/rc.d/rc3.d directory (when configured with runlevel 3), executing any K* scripts with an argument of stop. Then, all the S* scripts are run with an argument of start. Scripts are started in numerical order; thus, the S08iptables script is started before the S85httpd script. This allows you to choose exactly when your script starts without having to edit files. The same rule applies to the K* scripts.

In some rare cases, services may have to be removed from /etc/xinetd.d or /etc/inetd.conf file.

Debian users can use the following commands to stop, start and restart a service –

$ sudo service httpd stop
$ sudo service httpd start
$ sudo service httpd restart

Remove the startup script with:

[root@gateway~]# /bin/mv /etc/rc.d/rc3.d/S85httpd /etc/rc.d/rc3.d/K85httpd

or

[root@gateway~]# /bin/mv /etc/rc3.d/S85httpd /etc/rc3.d/K85httpd
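On Debian-based systems the same result is normally achieved with the distribution's own tools rather than by renaming symlinks by hand. Assuming the Apache service is named apache2 (as it is on Debian), one of the following should work, depending on whether the system uses SysV init scripts or systemd:

$ sudo update-rc.d apache2 disable
$ sudo systemctl disable apache2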

HOST-BASED FIREWALL PROTECTION WITH IPTABLES

Using the iptables firewall, you can limit access to your server by IP address or by host/domain name. RedHat/CentOS users have a file /etc/sysconfig/iptables based on the services that were ‘allowed’ during installation. The file can be edited to accept some services and block others. In case a requested service does not match any of the ACCEPT lines in the iptables file, the packet is logged and then rejected.

RedHat/CentOS/Fedora users will have to install the iptables with:

[root@gateway~]# yum install iptables

Debian users will need to install the iptables with the help of:

$ sudo apt-get install iptables

Then use the iptables command line options/switches to implement the policy. An iptables ruleset usually takes the form:
•    INDIVIDUAL REJECTS FIRST
•    THEN OPEN IT UP
•    BLOCK ALL

As it is a table of rules, the first matching rule takes precedence. If the first rule disallows everything, nothing that follows will matter.

In practice, a firewall script is needed which is created using the following sequence:
1) Create your script
2) Make it executable
3) Run the script

Following are the commands used for the above order:

[root@gateway~]# vim /root/firewall.sh  
[root@gateway~]# chmod 755 /root/firewall.sh 
[root@gateway~]# /root/firewall.sh 
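The contents of the script depend entirely on your own policy; the following is only a minimal sketch, assuming a host that should accept SSH and HTTP and drop everything else (adjust ports and addresses to your environment):

#!/bin/bash
# Minimal example firewall script - adjust to your own policy
IPT=/sbin/iptables

# Flush existing rules
$IPT -F INPUT
$IPT -F OUTPUT
$IPT -F FORWARD

# Allow loopback and established/related traffic
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Open up SSH (22) and HTTP (80)
$IPT -A INPUT -p tcp --dport 22 -j ACCEPT
$IPT -A INPUT -p tcp --dport 80 -j ACCEPT

# Block all the rest
$IPT -A INPUT -j DROP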

Updating the firewall script is simply a matter of re-editing it to make the necessary changes and running it again. Since iptables does not run as a daemon, there is nothing to stop; the rules are simply flushed with the ‘-F‘ option:

[root@gateway~]# iptables -F INPUT
[root@gateway~]# iptables -F OUTPUT
[root@gateway~]# iptables -F FORWARD
[root@gateway~]# iptables -F POSTROUTING -t nat
[root@gateway~]# iptables -F PREROUTING -t nat

At startup/reboot, all that is needed is to execute the script so the iptables rules are applied. The simplest way to do this is to add the script (/root/firewall.sh) to the /etc/rc.local file.
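Assuming your distribution still executes /etc/rc.local at the end of the boot process, a single line is enough:

[root@gateway~]# echo "/root/firewall.sh" >> /etc/rc.local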

BEST PRACTICES

Apart from the above, a number of steps need to be taken to keep your Linux server safe from outside attackers. Key files should be checked for security and must be set to root for both owner and group:

/etc/fstab
/etc/passwd
/etc/shadow
/etc/group

The above should be owned by root and their permissions must be 644 (rw-r--r--), except /etc/shadow, which should have permissions of 400 (r--------).
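A quick sketch of the commands that enforce this, assuming the standard file locations listed above:

[root@gateway ~]# chown root:root /etc/fstab /etc/passwd /etc/shadow /etc/group
[root@gateway ~]# chmod 644 /etc/fstab /etc/passwd /etc/group
[root@gateway ~]# chmod 400 /etc/shadow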

You can read more on how to set permissions on your Linux files in our Linux File & Folder Permissions article

LIMITING ROOT ACCESS

Implement a password policy that forces users to change their login passwords, for example, every 60 to 90 days, starts warning them 7 days before expiry, and accepts only passwords that are a minimum of 14 characters in length.
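A sketch of how such a policy could be applied, assuming an existing account named chris (the name and values are only examples): chage handles an existing account, while /etc/login.defs sets the defaults for newly created accounts:

[root@gateway ~]# chage -M 90 -W 7 chris

# In /etc/login.defs (defaults for new accounts):
PASS_MAX_DAYS   90
PASS_WARN_AGE   7
PASS_MIN_LEN    14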

Root access must be limited by using the following commands for RedHat/CentOS/Fedora –

[chris@gateway~]$ su -
Password: <enter root password>
[root@gateway ~]#

Or for RedHat/CentOS/Fedora/Debian,

[chris@gateway~]$ sudo -i
Password: <enter user password>
[root@gateway ~]#

In the sudo case you provide the password of the invoking user, who must be allowed to assume root privileges (via the sudoers file).

Only root should be able to access CRON. Cron is a system daemon used to execute desired tasks (in the background) at designated times.

A crontab is a simple text file with a list of commands meant to be run at specified times. It is edited with a command-line utility (crontab -e). These commands (and their run times) are then controlled by the cron daemon, which executes them in the system background. Each user has a crontab file which specifies the actions and the times at which they should be executed; these jobs run regardless of whether the user is actually logged into the system. There is also a root crontab for tasks requiring administrative privileges. This system crontab allows the scheduling of system-wide tasks (such as log rotations and system database updates). You can use the man crontab command to find more information about it.
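As an illustration, a crontab entry consists of five time fields (minute, hour, day of month, month, day of week) followed by the command to run; the backup script path below is purely hypothetical:

# m   h   dom  mon  dow   command
  30  2   *    *    *     /root/scripts/backup.sh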

Lastly, the use of SSH is recommended instead of Telnet for remote accesses. The main difference between the two is that SSH encrypts all data exchanged between the user and server, while telnet sends all data in clear-text, making it extremely easy to obtain root passwords and other sensitive information. All unused TCP/UDP ports must also be blocked using IPtables.

Understanding And Administering Linux Groups And User Accounts


In a multi-user environment like Linux, every file is owned by a user and a group. There can be others as well who may be allowed to work with the file. What this means is, as a user, you have all the rights to read, write and execute a file created by you. Now, you may belong to a group, so you can give your group members the permission to either read, write (modify) and/or execute your file. In the same way, for those who do not belong to your group, and are called ‘others’, you may give similar permissions.

How are these permissions shown and how are they modified?

In a shell, command line or within a terminal, if you type ‘ls -l‘, you will see something like –

drwxr-x— 3 tutor firewall  4096 2010-08-21 15:52 Videos
-rwxr-xr-x 1 tutor firewall    21 2010-05-10 10:02 Doom-TNT

The last group of words on the right is the name of the file or directory. Therefore, ‘Videos‘ is a directory, which is designated by the ‘d’ at the start of the line. Since ‘Doom-TNT‘ shows a ‘-‘ at the start of the line, it is a file. The following series of ‘rwx…‘ are the permissions of the file or directory. You will notice that there are three sets of ‘rwx‘. The first set of ‘rwx‘ holds the read, write and execute permissions for the owner ‘tutor‘.

Since the r, w and x are all present, it means the owner has all the permissions. The next set of ‘rwx‘ holds the permissions for the group, here ‘firewall‘. You will notice that the ‘w‘ is missing and is replaced by a ‘-‘. This means members of the group ‘firewall‘ have permission to read and to execute ‘Doom-TNT‘, but cannot write to it or modify it. Permission for ‘others‘ is the same. Therefore, others can also read and execute the file, but not write to it or modify it. Others do not have any permissions for the directory ‘Videos‘ and hence cannot read (enter), modify or execute ‘Videos‘.

You can use the ‘chmod‘ command to change the permissions you give. The basic form of the command looks like:

chmod ‘who’+/-‘permissions’ ‘filename’

Here, the ‘filename‘ is the file, whose permissions are being modified. You are giving the permissions to ‘who‘, and ‘who‘ can be u=user (meaning you), g=group, o=others, or a=all.

The ‘permissions‘ you give can be r=read, w=write, x=execute or ‘space‘ for no permissions. Using a ‘+‘ grants the permission, and a ‘-‘ removes it.

As an example, the command ‘chmod o+r Videos‘ will result in:

drwxr-xr-- 3 tutor firewall  4096 2010-08-21 15:52 Videos

and now ‘others‘ can read ‘Videos‘. Similarly, ‘chmod o-r Videos‘, will set it back as it was, before the modification.

Linux file and folder permissions are covered extensively on our dedicated Linux File & Folder permissions article.

WHAT HAPPENS IN A GUI ENVIRONMENT?

If you are using a file manager like Nautilus, you will find a ‘view‘ menu, which has an entry ‘Visible Columns‘. This opens up another window showing the visible columns that you can select to allow the file manager to show. You will find there are columns like ‘Owner‘, ‘Group‘ and ‘Permissions‘. By turning these columns ON, you can see the same information as with the ‘ls -l‘ command.

If you want to modify the permissions of any file from Nautilus, you will have to right-click on the file with your mouse. This will open up a window through which you can access the ‘properties’ of the file. In the properties window, you can set or unset any of the permissions for owner, group and others.

WHAT ARE GROUP IDS?

Because Linux is a multi-user system, there could be several users logged in and using the system. The system needs to keep track of who is using what resources. This is primarily done by allocating identification numbers or IDs to all users and groups. To see the IDs, you may enter the command ‘id‘, which will show you the user ID, the group ID and the IDs of the groups to which you belong.
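An illustrative (not literal) example of what the id command might print for an ordinary desktop user:

$ id
uid=1000(tutor) gid=1000(tutor) groups=1000(tutor),4(adm),24(cdrom),29(audio),44(video),46(plugdev)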

A standard Linux installation, for example Ubuntu, comes with some groups preconfigured. Some of these are:

4(adm), 20(dialout), 21(fax), 24(cdrom), 26(tape), 29(audio), 30(dip), 44(video), 46(plugdev), 104(fuse), 106(scanner), 114(netdev), 116(lpadmin), 118(admin), 125(sambashare)

The numbers are the group IDs and their names are given inside brackets. Unless you are a member of a specific group, you are not allowed to use that resource. For example, unless you belong to the group ‘cdrom’, you will not be allowed to access the contents of any CDs and DVDs that are mounted on the system.

In Linux, the ‘root‘ or ‘super user‘, also called the ‘administrator‘, is a user who is a member of all the groups and has all permissions in all places, unless specifically changed. Users who have been granted root privileges in the ‘sudoers‘ file can assume root status temporarily with the ‘sudo‘ command.
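As a quick sketch, adding an existing user to one of these groups is done with usermod; the username below is only an example, and the change takes effect at the user's next login:

# usermod -aG cdrom tutor
# id tutor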

Understanding Linux File System Quotas – Installation And Setup


When you are running your own web hosting, it is important to monitor how much space is being used by each user. This is not a simple task to do manually, and a single user or group could fill up the whole hard disk, leaving no space for the others. Therefore, it is important to allot each user or group their own share of hard disk space, called a quota, and lock them out from using more than what is allotted.

The system administrator sets a limit or a disk quota to restrict certain aspects of the file system usage on a Linux operating system. In multi-user environments, disk quotas are very useful since a large number of users have access to the file system. They may be logging into the system directly or using their disk space remotely. They may also be accessing their files through NFS or through Samba. If several users host their websites on your web space, you need to implement the quota system.

HOW TO INSTALL QUOTA

For installing a quota system, for example, on your Debian or RedHat Linux system, you will need two tools called ‘quota’ and ‘quotatool’. During installation of these tools, you will be asked if you wish to send daily reminders to users who are going over their quotas.

Now, the administrator also needs to know which users are going over their quota. The system will send an email to this effect, therefore the email address of the administrator has to be entered next.

In case the user does not know what to do if the system gives him a warning message, the next entry is the contact number of the administrator. This will be displayed to the user along with the warning message. With this, the quota system installation is completed.
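As a sketch, the packages can be installed with the distribution's package manager; package names may differ slightly between releases:

# apt-get install quota quotatool        (Debian/Ubuntu)
# yum install quota                      (RedHat/CentOS/Fedora)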

At this time, the user and group quota files have to be created and given the proper permissions. To create them, you have to assume root status and type the following commands:

# touch /aquota.user /aquota.group
# chmod 600 /aquota.*

Next, quotas have to be enabled on the root file system. For this, its entry in the ‘fstab’ file in the /etc directory has to be modified by adding the following mount options:

noatime,nodiratime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
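A hedged example of what the complete root entry might look like afterwards; the device name and filesystem type are assumptions, so keep your own values:

/dev/sda1  /  ext4  defaults,noatime,nodiratime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0  1  1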

After this, the computer has to be rebooted, or the file system remounted with the command:

# mount -o remount /

The system is now able to work with disk quotas. However, you have to allow the system to build/rebuild its table of current disk usage. For this, you must first run quotacheck.

This will examine all the quota-enabled file systems, and build a table of the current disk usage for each one. The operating system’s copy of the disk usage is then updated. In addition, this creates the disk quota files for the entire file system. If the quota files already existed, they are updated. The command looks like:

# quotacheck -avugm

Some explanation is necessary here. The (-a) tells the command that all locally mounted quota-enabled file systems are to be checked. The (-v) is to display the status information as the check proceeds. The (-u) is to enable checking the user disk quota information. The (-g) is to enable checking the group disk quota information. Finally, the (-m) tells the command not to try to remount file system read-only.

After checking and building the disk-quota files is over, the disk-quotas have to be turned on. This is done by the command ‘quotaon’ to inform the system that disk-quota should be enabled, such as:

# quotaon -avug

Here, (-a) forces all file systems in /etc/fstab to enable their quotas. The (-v) displays status information for each file system. The (-u) is for enabling the user quota. The (-g) enables the group quota.

DEFINE QUOTA FOR EACH USER/GROUP

Now that the system is ready for quotas, you can start defining what each user or group gets as their limit. Two types of limits can be defined: the soft limit and the hard limit. To set the two limits (in disk blocks and inodes), edit them with:

# edquota -u $USER

This allows you to edit the line:

/dev/sda1            1024          200000         400000      1024        0        0

The columns show the filesystem, the blocks currently used, the soft and hard block limits, the inodes used, and the soft and hard inode limits. Here, the soft block limit is 200000 (roughly 200MB) and the hard limit is 400000 (roughly 400MB). You may change them to suit your user (denoted by $USER).

The soft limit has a grace period of 7 days by default. It can be changed to days, hours, minutes, or seconds as desired by:

# edquota -t

This allows you to edit the line below. It has been modified to change the default to 15 minutes:

/dev/sda1                 15minutes                  7days

For editing group quota use:

# edquota -g $GROUP

QUOTA STATUS REPORT

Now that you have set a quota, it is easy to create a mini report on how much space a user has used. For this use the command:

root@gateway [~]# repquota -a

*** Report for user quotas on device /dev/vzfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                    File limits
User            used    soft    hard  grace      used  soft  hard  grace
-------------------------------------------------------------------------
root      --  5578244       0       0           117864     0     0
bin       --    30936       0       0              252     0     0
mail      --       76       0       0               19     0     0
nobody    --        0       0       0                3     0     0
mailnull  --     3356       0       0              157     0     0
smmsp     --        4       0       0                2     0     0
named     --      860       0       0               11     0     0
rpc       --        0       0       0                1     0     0
mailman   --    40396       0       0             2292     0     0
dovecot   --        4       0       0                1     0     0
mysql     --   181912       0       0              857     0     0
firewall  --    92023  153600  153600            21072     0     0
#55       --     1984       0       0               74     0     0
#200      --     1104       0       0               63     0     0
#501      --     6480       0       0              429     0     0
#506      --      648       0       0               80     0     0
#1000     --     7724       0       0              878     0     0
#50138    --    43044       0       0             3948     0     0

Once the user and group quotas are set up, it is simple to manage your storage and users can no longer hog all of the disk space. Disk quotas also force your users to be tidier, so home directories do not fill up with junk or old documents that are no longer needed.

Linux System Resource And Performance Monitoring


You may be a user at home, a user in a LAN (local area network), or a system administrator of a large network of computers. Alternatively, you may be maintaining a large number of servers with multiple hard drives. Whatever may be your function, monitoring your Linux system is of paramount importance to keep it running in top condition.

While monitoring a complex computer system, some of the basic things to be kept in mind are the utilization of the hard disk, memory or RAM, CPU, the running processes, and the network traffic. Analysis of the information made available during monitoring is necessary, since all the resources are limited. Reaching the limits or exceeding them on any of the resources could lead to severe consequences, which may even be catastrophic.

MONITORING THE HARD DISK SPACE

Use a simple command like:

$ df -h

This results in the output:

Filesystem                Size          Used         Avail     Use%       Mounted on
/dev/sda1                 22G          5.0G          16G      24%         /
/dev/sda2                 34G           23G          9.1G     72%         /home

This shows there are two partitions (1 & 2) of the hard disk sda, which are currently at 24% and 72% utilization. The total size is shown in gigabytes (G). How much is used and how much remains available is shown as well. However, checking each hard disk manually to see the percentage used can be a big drag. It is better to have the system check the disks and inform you by email if there is a potential danger; a Bash script may be written for this and run at specific times as a cron job, as in the sketch below.
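A minimal sketch of such a script, assuming the mail command is available on the system and 90% is the chosen threshold (both are assumptions):

#!/bin/bash
# Warn the administrator when any filesystem exceeds the usage threshold
THRESHOLD=90
ADMIN="root@localhost"

df -hP | awk 'NR>1 {print $5 " " $6}' | while read usage mount; do
    pct=${usage%\%}
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "Warning: $mount is at ${pct}% capacity" | mail -s "Disk usage alert" "$ADMIN"
    fi
done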

For the GUI, there is a graphical tool called ‘Baobab’ for checking the disk usage. It shows how a disk is being used and displays the information in the form of either multicolored concentric rings or boxes.

MONITORING MEMORY USAGE

RAM or memory is used to run the current application. Under Linux, there are a number of ways you can check the used memory space — both in static and dynamic conditions.

For a static snapshot of the memory, use ‘free -m’ which results in the output:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1998       1896        101          0         59        605
-/+ buffers/cache:        1231        766
Swap:          290          77        213


Here, the total amount of RAM is depicted in megabytes (MB), along with cache and swap. A somewhat more detailed output can be obtained by the command ‘vmstat’:

root@gateway [~]# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 767932      0      0    0    0    10     3    0    1  2  0 97  0  0
root@gateway [~]#

However, if a dynamic situation of what is happening to the memory is to be examined, you have to use ‘top’ or ‘htop’. Both will give you a picture of which process is using what amount of memory and the picture will be updated periodically. Both ‘top’ and ‘htop’ will also show the CPU utilization, tasks running and their PID. Whereas ‘top’ has a purely numerical display, ‘htop’ is somewhat more colorful and has a semi-graphic look. There is also a list of command menus at the bottom for set up and specific operations.

root@gateway [~]# top

top - 01:04:18 up 81 days, 11:05,  1 user,  load average: 0.08, 0.28, 0.33
Tasks:  47 total,   1 running,  45 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.4%us,  0.4%sy,  0.0%ni, 96.7%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1048576k total,   261740k used,   786836k free,        0k buffers
Swap:            0k total,            0k used,            0k free,        0k cached

PID    USER       PR  NI  VIRT  RES  SHR S  %CPU   %MEM    TIME+  COMMAND
1   root         15   0  10372  736  624 S   0.0       0.1        1:41.86     init
5407   root         18   0  12424  756  544 S   0.0       0.1        0:13.71    dovecot
5408   root         15   0  19068 1144  892 S  0.0       0.1        0:12.09    dovecot-auth
5416   dovecot   15   0  38480 2868 2008 S  0.0       0.3        0:10.80    pop3-login
5417   dovecot   15   0  38468 2880 2008 S  0.0       0.3        0:49.31    pop3-login
5418   dovecot   16   0  38336 2700 2020 S  0.0       0.3        0:01.15    imap-login
5419   dovecot   15   0  38484 2856 2020 S  0.0       0.3        0:04.69    imap-login
9745   root        18   0  71548  22m 1400 S  0.0       2.2        0:01.39    lfd
11501  root        15   0  160m  67m 2824 S   0.0       6.6        1:32.51   spamd
23935  firewall   18   0  15276 1180  980 S   0.0        0.1        0:00.00   imap
23948  mailnull  15   0  64292 3300 2620 S   0.0       0.3        0:05.62   exim
23993  root       15   0  141m  49m 2760 S   0.0       4.8         1:00.87   spamd
24477  root       18   0  37480 6464 1372 S   0.0       0.6        0:04.17   queueprocd
24494  root       18   0  44524 8028 2200 S  0.0        0.8        1:20.86   tailwatchd
24526  root       19   0  92984  14m 1820 S  0.0       1.4         0:00.00   cpdavd
24536  root       33  18 23892 2556  680 S   0.0       0.2         0:02.09   cpanellogd
24543  root       18   0  87692  11m 1400 S  0.0       1.1         0:33.87   cpsrvd-ssl
25952  named    22  0 349m 8052 2076 S    0.0       0.8        20:17.42   named
26374  root       15  -4 12788  752  440 S    0.0       0.1         0:00.00   udevd
28031  root       17   0 48696 8232 2380 S   0.0       0.8         0:00.07   leechprotect
28038  root       18   0 71992 2172  132 S   0.0       0.2         0:00.00   httpd
28524  root       18   0 90944 3304 2584 S  0.0       0.3         0:00.01   sshd

For a graphical display of how the memory is being utilized, the Gnome System Monitor gives a detailed picture. There are other system monitors available under various window managers in Linux.

WHAT IS YOUR CPU DOING?

You may have a single, a dual core, or a quad core CPU in your system. To see what each CPU is doing or how two CPUs are sharing the load, you have to use ‘top’ or ‘htop’. These command line applications show the percentage of each CPU being utilized. You can also see process statistics, memory utilization, uptime, load average, CPU status, process counts, and memory and swap space utilization statistics.

Similar output statistics may be seen by using command line tools such as the ‘mpstat’, which is part of a group package called ‘sysstat’. You may have to install ‘sysstat’ in your system, since it may not be installed by default. Once installed, you can monitor a variety of parameters, for example compare the CPU utilization of an SMP system or multi-processor system.

Finding out if any specific process is hogging the CPU needs a little more command line instruction such as:

$ ps -eo pcpu,pid,user,args | sort -r -k1 | less

OR

$ ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10

Similar output can be obtained by using the command ‘iostat’ as root:

root@gateway [~]# iostat -xtc 5 3
Linux 2.6.18-028stab094.3 (gateway.firewall.cx)         01/11/2012

Time: 01:13:15 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
2.38    0.01     0.43          0.46      0.00      96.72

Time: 01:13:20 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
3.89    0.00     0.26          0.09      0.00      95.77

Time: 01:13:25 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
0.31    0.00    0.15           1.07     0.00       98.47

This produces three reports at five-second intervals; the first report shows averages since the last reboot, while the subsequent ones cover each five-second interval.

CPU usage under GUI is very well depicted by the Gnome System Monitor and other system monitoring applications. These are also useful for monitoring remote servers. Detailed memory maps can be accessed, signals can be sent and processes controlled remotely.

[Image: Gnome System Monitor (linux-system-monitoring-1)]

WHAT’S COOKING?

How do you know what processes are currently running in your Linux system? There are innumerable ways of getting to see this information. The handiest applications are the old faithfuls – ‘top’ and ‘htop’. They will give a real-time image of what is going on under the hood. However, if you prefer a more static view, use ‘ps’. To see all processes try ‘ps -A’ or ‘ps -e’:

root@gateway [~]# ps -e
PID TTY          TIME CMD
1 ?          00:01:41 init
3201 ?        00:00:00 leechprotect
3208 ?        00:00:00 httpd
3360 ?        00:00:00 httpd
3490 ?        00:00:00 httpd
3530 ?        00:00:00 httpd
3532 ?        00:00:00 httpd
3533 ?        00:00:00 httpd
3535 ?        00:00:00 httpd
3575 ?        00:00:00 httpd
3576 ?        00:00:00 httpd
3631 ?        00:00:00 imap
3694 ?        00:00:00 httpd
3705 ?        00:00:00 httpd
3770 ?        00:00:00 imap
3774 pts/0    00:00:00 ps
5407 ?        00:00:13 dovecot
5408 ?        00:00:12 dovecot-auth
5416 ?        00:00:10 pop3-login
5417 ?        00:00:49 pop3-login
5418 ?        00:00:01 imap-login
5419 ?        00:00:04 imap-login
9745 ?        00:00:01 lfd
11501 ?        00:01:35 spamd
23948 ?        00:00:05 exim
23993 ?        00:01:00 spamd
24477 ?        00:00:04 queueprocd
24494 ?        00:01:20 tailwatchd
24526 ?        00:00:00 cpdavd
24536 ?        00:00:02 cpanellogd
24543 ?        00:00:33 cpsrvd-ssl
25952 ?        00:20:17 named
26374 ?        00:00:00 udevd
28524 ?        00:00:00 sshd
28531 pts/0    00:00:00 bash
29834 ?        00:00:00 sshd
30426 ?        00:11:27 syslogd
30429 ?        00:00:00 klogd
30473 ?        00:00:00 xinetd
30485 ?        00:00:00 mysqld_safe
30549 ?        1-15:07:28 mysqld
32158 ?        00:06:29 httpd
32166 ?        00:12:39 pure-ftpd
32168 ?        00:07:12 pure-authd
32181 ?        00:01:06 crond
32368 ?        00:00:00 saslauthd
32373 ?        00:00:00 saslauthd

ps is an extremely powerful and versatile command, and you can learn more with ‘ps --help’:

root@gateway [~]# ps --help
********* simple selection *********  ********* selection by list *********
-A all processes                      -C by command name
-N negate selection                   -G by real group ID (supports names)
-a all w/ tty except session leaders  -U by real user ID (supports names)
-d all except session leaders         -g by session OR by effective group name
-e all processes                      -p by process ID
T  all processes on this terminal     -s processes in the sessions given
a  all w/ tty, including other users  -t by tty
g  OBSOLETE -- DO NOT USE             -u by effective user ID (supports names)
r  only running processes             U  processes for specified users
x  processes w/o controlling ttys     t  by tty
*********** output format **********  *********** long options ***********
-o,o user-defined  -f full            --Group --User --pid --cols --ppid
-j,j job control   s  signal          --group --user --sid --rows --info
-O,O preloaded -o  v  virtual memory  --cumulative --format --deselect
-l,l long          u  user-oriented   --sort --tty --forest --version
-F   extra full    X  registers       --heading --no-heading --context
                   ********* misc options *********
-V,V  show version      L  list format codes      f  ASCII art forest
-m,m,-L,-T,H  threads   S  children in sum        -y change -l format
-M,Z  security data     c  true command name      -c scheduling class
-w,w  wide output       n  numeric WCHAN,UID      -H process hierarchy

Setup And Configuring Linux Samba (SMB) For Linux To Windows File Sharing


Resource sharing, like file systems and printers, in Microsoft Windows systems, is accomplished using a protocol called the Server Message Block or SMB. For working with such shared resources over a network consisting of Windows systems, an RHEL system must support SMB. The technology used for this is called SAMBA. This provides integration between the Windows and Linux systems. In addition, this is used to provide folder sharing between Linux systems. There are two parts to SAMBA, a Samba Server and a Samba Client.

When an RHEL system accesses resources on a Windows system, it does so using the Samba Client. An RHEL system, by default, has the Samba Client installed.

When an RHEL system serves resources to a Windows system, it uses the package Samba Server or simply Samba. This is not installed by default and has to be exclusively set up.

INSTALLING SAMBA ON LINUX REDHAT/CENTOS

Whether Samba is already installed on your RHEL, Fedora or CentOS setup can be tested with the following command:

$ rpm -q samba

The result could be – “package samba is not installed,” or something like “samba-3.5.4-68.el6_0.1.x86_64” showing the version of Samba present on the system.

To install Samba, you will need to become root with the following command (give the root password, when prompted):

$ su -

Then use Yum to install the Linux Samba package:

# yum install samba

This will install the samba package and its dependency package, samba-common.

Before you begin to use or configure Samba, the Linux Firewall (iptables) has to be configured to allow Samba traffic. From the command-line, this is achieved with the use of the following command:

# firewall-cmd --enable --service=samba

CONFIGURING LINUX SAMBA

The Samba configuration below joins an RHEL, Fedora or CentOS system to a Windows workgroup and sets up a directory on the RHEL system to act as a shared resource that can be accessed by authenticated Windows users.

To start with, you must gain root privileges with (give the root password, when prompted):

$ su -

Edit the Samba configuration file:

# vi /etc/samba/smb.conf

THE SMB.CONF [GLOBAL] SECTION

The smb.conf file is divided into several sections. The [global] section, which is the first, has settings that apply to the entire Samba configuration; however, settings in the other sections of the configuration file may override the global ones.

To begin with, set the workgroup, which by default is set as “MYGROUP”:

workgroup = MYGROUP

Since most Windows networks are named WORKGROUP by default, the settings have to be changed as:

workgroup = workgroup

CONFIGURE THE SHARED RESOURCE

In the next step, a shared resource that will be accessible from the other systems on the Windows network has to be configured. This section has to be given a name by which it will be referred to when shared. For our example, let’s assume you would like to share a directory on your Linux system located at /data/network-applications. You’ll need to title the entire section [NetApps], as shown below in our smb.conf file:

[NetApps]       

path = /data/network-applications

writeable = yes
browseable = yes
valid users = administrator

When a Windows user browses to the Linux Server, they’ll see a network share labeled
“NetApps”.

This concludes the changes to the Samba configuration file.

CREATE A SAMBA USER

Any user wanting to access any Samba shared resource must be configured as a Samba User and assigned a password. This is achieved using the smbpasswd  command as a root user. Since you have defined “administrator” as the user who is entitled to access the “/data/network-applications” directory of the RHEL system, you have to add “administrator” as a Samba user.

You must gain root privileges with the following command (give the root password, when prompted):

$ su -

Add “administrator” as a Windows user –

# smbpasswd -a administrator

The system will respond with

New SMB password: <Enter password>
Retype new SMB password: <Retype password>

This will result into the following message:

Added user administrator

It will also be necessary to add the same account as a simple linux user, using the same password we used for the samba user:

# adduser administrator
# passwd administrator
Changing password for user administrator
New UNIX password: ********
Retype new UNIX password: ********
passwd: all authentication tokens updated successfully.

Now it is time to test the samba configuration file for any errors. For this you can use the command line tool “testparm” as root:

# testparm
Load smb config files from /etc/samba/smb.conf
Rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
Processing section “[NetApps]”

Loaded services file OK.

Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

If you would like Windows users to be authenticated to your Samba share automatically, without being prompted for a username/password, all that’s needed is to create the Samba users and passwords exactly as they exist on your Windows clients. When a Windows system accesses a Samba share, it automatically tries to log in using the same credentials as the user logged into the Windows system.

STARTING SAMBA AND NETBIOS NAME SERVICE ON RHEL

The Samba and NetBios Nameservice or NMB services have to be enabled and then started for them to take effect:

# systemctl enable smb.service

# systemctl start smb.service
# systemctl enable nmb.service
# systemctl start nmb.service

In case the services were already running, you may have to restart them again:

# systemctl restart smb.service
# systemctl restart nmb.service

If you are not using systemctl command, you can alternatively start the Samba using a more classic way:

[root@gateway] service smb start
Starting SMB services:  [OK]

To configure your Linux system to automatically start the Samba service upon boot up, the above command will need to be inserted in the /etc/rc.local file. For more information about this, you can read our popular Linux Init Process & Different Runlevels article.

 

ACCESSING THE SAMBA SHARES FROM WINDOWS

Now that you have configured the Samba resources and the services are running, they can be tested for sharing from a Windows system. For this, open the Windows Explorer and navigate to the Network page. Windows should show the RHEL system. If you double-click on the RHEL icon, you will be prompted for the username and password. The username to be entered now is “administrator” with the password that was assigned.

Again, if you are logged on your Windows workstation using the same account and password as that of the Samba service (e.g Administrator), you will not be prompted for any authentication as the Windows  operating system will automatically authenticate to the RHEL Samba service using these credentials.

ACCESSING WINDOWS SHARES FROM RHEL WORKSTATION OR SERVER

To access Windows shares from your RHEL system, the package samba-client may have to be installed, unless it is installed by default. For this you must gain root privileges with (give the root password, when prompted):

$ su -

Install samba-client using the following commands:

# yum install samba-client

To see any shared resource on the Windows system and to access it, you can go to Places > Network. Clicking on the Windows Network icon will open up the list of workgroups available for access.
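From the command line, the same can be done with the smbclient utility that ships with the samba-client package; the hostname, share and user below are placeholders. The first command lists the shares offered by a Windows host, and the second opens an FTP-like session to one of them:

# smbclient -L //winserver -U administrator
# smbclient //winserver/Documents -U administrator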

Implementing Virtual Servers And Load Balancing Cluster System With Linux


WHAT IS SERVER VIRTUALIZATION?

Server virtualization is the process of apportioning a physical server into several smaller virtual servers. During server virtualization, the resources of the server itself remain hidden. In fact, the resources are masked from users, and software is used for dividing the physical server into multiple virtual machines or environments, called virtual or private servers.

This technology is commonly used in Web servers. Virtual Web servers provide a very simple and popular way of offering low-cost web hosting services. Instead of using a separate computer for each server, dozens of virtual servers can co-exist on the same computer.

There are many benefits of server virtualization. For example, it allows each virtual server to run its own operating system. Each virtual server can be independently rebooted without disturbing the others. Because several servers run on the same hardware, less hardware is required for server virtualization, which saves a lot of money for the business. Since the process utilizes resources to the fullest, it saves on operational costs. Using a lower number of physical servers also reduces hardware maintenance.

In most cases, the customer does not observe any performance deficit and each web site behaves as if it were being served by a dedicated server. However, since the resources of the computer are shared, if a large number of virtual servers reside on the same machine, or if one of the virtual servers starts to hog the resources, web pages will be delivered more slowly.

There are several ways of creating virtual servers, with the most common being virtual machines, operating system-level virtualization, and paravirtual machines.

HOW ARE VIRTUAL SERVERS HELPFUL

The way Internet is exploding with information, it is playing an increasingly important role in our lives. Internet traffic is increasing dramatically, and has been growing at an annual rate of nearly 100%. The workload on the servers is simultaneously increasing significantly so that servers frequently become overloaded for short durations, especially for popular web sites.

To overcome the overloading problem of the servers, there are two solutions. You could have a single server solution, such as upgrading the server to a higher performance server. However, as requests increase, it will soon be overloaded, so that it has to be upgraded repeatedly. The upgrading process is complex and the cost is high.

The other is the multiple server solution, such as building a scalable network service system on a cluster of servers. As load increases, you can just add one or several new servers into the cluster to meet the increasing requests, and a virtual server running on commodity hardware offers the best cost-to-performance ratio. Therefore, for network services, the virtual server is a highly scalable and more cost-effective way of building a server cluster system.

VIRTUAL SERVERS WITH LINUX

Highly available server solutions are done by clustering. Cluster computing involves three distinct branches, of which two are addressed by RHEL or Red Hat Enterprise Linux:

  • Load balancing clusters using Linux Virtual Servers as specialized routing machines to dispatch traffic to a pool of servers.

  • Highly available or HA clustering with Red Hat Cluster Manager that uses multiple machines to add an extra level of reliability for a group of services.

LOAD BALANCING CLUSTER SYSTEM USING RHEL VIRTUAL SERVERS

When you access a website or a database application, you do not know if you are accessing a single server or a group of servers. To you, the Linux Virtual Server or LVS cluster appears as a single server. In reality, there is a cluster of two or more servers behind a pair of redundant LVS routers. These routers distribute the client requests evenly throughout the cluster system.

Administrators use Red Hat Enterprise Linux and commodity hardware to address availability requirements, and to create consistent and continuous access to all hosted services.

In its simplest form, an LVS cluster consists of two layers. In the first layer are two similarly configured cluster members, which are Linux machines. One of these machines is the LVS router and is configured to direct the requests from the internet to the servers. The LVS router balances the load on the real servers, which form the second layer. The real servers provide the critical services to the end-user. The second Linux machine acts as a monitor to the active router and assumes its role in the event of a failure.

The active router directs traffic from the internet to the real servers by making use of Network Address Translation or NAT. The real servers are connected to a dedicated network segment and transfer all public traffic via the active LVS router. The outside world sees this entire cluster arrangement as a single entity.

LVS WITH NAT ROUTING

The active LVS router has two Network Interface Cards or NICs. One of the NICs is connected to the Internet and has a real IP address on the eth0 and a floating IP address aliased to eth0:1. The other NIC connects to the private network with a real IP address on the eth1, and a floating address aliased to eth1:1.

All the servers of the cluster are located on the private network and use the floating IP of the NAT router. They communicate with the active LVS router via the floating IP as their default route. This ensures that their ability to respond to requests from the internet is not impaired.

When requests are received by the active LVS router, it routes the request to an appropriate server. The real server processes the request and returns the packets to the LVS router. Using NAT, the LVS router then replaces the address of the real server in the packets with the public IP address of the LVS router. This process is called IP Masquerading, and it hides the IP addresses of the real servers from the requesting clients.

CONFIGURING LVS ROUTERS WITH THE PIRANHA CONFIGURATION TOOL

The configuration file for an LVS cluster follows strict formatting rules. To prevent server failures because of syntax errors in the file lvs.cf, using the Piranha Configuration Tool is highly recommended. This tool provides a structured approach to creating the necessary configuration file for a Piranha cluster. The configuration file is located at /etc/sysconfig/ha/lvs.cf, and the configuration is done through the web-based Piranha interface, which is served by the Apache HTTP Server.

As an example, we will use the following settings:

LVS Router 1: eth0: 192.168.26.201
LVS Router 2: eth0: 192.168.26.202
Real Server 1: eth0: 192.168.26.211
Real Server 2: eth0: 192.168.26.212
VIP: 192.168.26.200
Gateway: 192.168.26.1

You will need to install piranha and ipvsadm packages on the LVS Routers:

# yum install ipvsadm

# yum install piranha

Start services on the LVS Routers with:

# chkconfig pulse on
# chkconfig piranha-gui on

# chkconfig httpd on

Set a password for the Piranha Configuration Tool using the following command:

# piranha-passwd

Next, turn on Packet Forwarding on the LVS Routers with:

# echo 1 > /proc/sys/net/ipv4/ip_forward
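Note that the echo above only enables forwarding until the next reboot; to make it persistent you would typically set the corresponding sysctl key, a common approach assuming /etc/sysctl.conf is used on your system:

# In /etc/sysctl.conf:
net.ipv4.ip_forward = 1

# Apply it without a reboot:
# sysctl -p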

STARTING THE PIRANHA CONFIGURATION TOOL SERVICE

First you’ll need to put SELinux into permissive mode, then start the httpd and piranha-gui services:

# setenforce 0
# service httpd start

# service piranha-gui start

If this is not done, the system will most probably show the following error message when the piranha-gui service is started:

Starting piranha-gui: (13)Permission denied: make_sock: could not bind to address [::]:3636

(13)Permission denied: make_sock: could not bind to address 0.0.0.0:3636
No listening sockets available, shutting down
Unable to open logs

CONFIGURE THE LVS ROUTERS WITH THE PIRANHA CONFIGURATION TOOL

The Piranha Configuration Tool runs on port 3636 by default. Open http://localhost:3636 or http://192.168.26.201:3636 in a Web browser to access the Piranha Configuration Tool. Click on the Login button and enter piranha for the Username and the administrative password you created in the Password field:

linux-virtual-servers-1

Click on the GLOBAL SETTINGS panel, enter the primary server public IP, and click the ACCEPT button:

linux-virtual-servers-2

Click on the REDUNDANCY panel, enter the redundant server public IP, and click the ACCEPT button:

linux-virtual-servers-3

Click on the VIRTUAL SERVERS panel, add a server, edit it, and activate it:

linux-virtual-servers-4

linux-virtual-servers-5

Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection. Click the ADD button to add new servers, then edit and activate them:

linux-virtual-servers-6

Copy the lvs.cf file to another LVS router:

# scp /etc/sysconfig/ha/lvs.cf root@192.168.26.202:/etc/sysconfig/ha/lvs.cf

Start the pulse services on the LVS Routers with the following command:

# service pulse restart

TESTING THE SYSTEM

You can use the Apache HTTP server benchmarking tool (ab) to simulate client requests to the virtual server.
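
For example, a quick test against the virtual IP from our settings above might look like the following (this assumes the httpd-tools package, which provides ab, is installed on the test machine; the request count and concurrency are purely illustrative):

# ab -n 1000 -c 10 http://192.168.26.200/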

HA CLUSTERING WITH RED HAT CLUSTER MANAGER

When dealing with clusters, single points of failure, unresponsive applications and failed nodes are some of the issues that reduce server availability. Red Hat addresses these issues through its High Availability (HA) Add-On. Centralised configuration and management are among the best features of RHEL's Conga application.

For delivering an extremely mature, high-performing, secure and lightweight high-availability server solution, RHEL implements the Totem Single Ring Ordering and Membership Protocol. Corosync is the cluster executive within the HA Add-On.

KERNEL-BASED VIRTUAL MACHINE TECHNOLOGY

RHEL uses the Linux kernel that has the virtualization characteristics built-in and makes use of the kernel-based virtual machine technology known as KVM. This makes RHEL perfectly suitable to run as either a host or a guest in any Enterprise Linux deployment. As a result, all Red Hat Enterprise Linux system management and security tools and certifications are part of the kernel and always available to the administrators, out of the box.

RHEL uses highly improved SCSI-3 persistent reservation (PR) based fencing. Fencing is the process of cutting off a node's access to shared cluster resources once it has lost contact with the cluster. This prevents uncoordinated modification of shared storage, thus protecting the resources.

Improvement in system flexibility and configuration is possible because RHEL allows manual specification of devices and keys for reservation and registration. Ordinarily, after fencing, the disconnected cluster node would need to be rebooted to rejoin the cluster. RHEL unfencing makes it possible to re-enable access and start up the node without administrative intervention.

IMPROVED CLUSTER CONFIGURATION

The Lightweight Directory Access Protocol (LDAP) provides an improved cluster configuration system for loading options. This gives better manageability and usability across the cluster, as the configuration can easily be validated, synchronized and reloaded. Virtualized KVM guests can be run as managed services.

RHEL's web interface for cluster management and administration runs on TurboGears2 and provides a rich graphical user interface. It enables unified logging and debugging: administrators can enable, capture and read cluster system logs using a single cluster configuration command.

INSTALLING TURBOGEARS2

The method of installing TurboGears2 depends on the platform and the level of experience. It is recommended to install TurboGears2 within a virtual environment, as this will prevent interference with the system's installed packages. Prerequisites for installation of TurboGears2 are Python, Setuptools, a database and drivers, Virtualenv, Virtualenvwrapper and other dependencies.

linux-virtual-servers-7
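
As a rough, hedged sketch of such a virtualenv-based installation (the package names TurboGears2 and tg.devtools come from the TurboGears project and may differ between releases):

# virtualenv tg2env
# source tg2env/bin/activate
# pip install TurboGears2
# pip install tg.devtools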

Configuring Linux To Act As A Firewall With Iptables


What exactly is a firewall? In the non-computer world, a firewall acts as a physical barrier that prevents fire from spreading. In the computer world the firewall acts in a similar manner, only the fires it stops from spreading are the attacks crackers launch when the computer is connected to the Internet. A firewall can therefore also be called a packet filter, which sits between the computer and the Internet, controlling and regulating the information flow.

Most of the firewalls in use today are filtering firewalls. They sit between the computer and the Internet and limit access to only specific computers on the network. They can also be programmed to limit the type of communication, and to selectively permit or deny several Internet services.

Organizations receive their routable IP addresses from their ISPs. However, the number of IP addresses given is limited. Therefore, alternative ways of sharing Internet services have to be found without every node on the LAN getting a public IP address. This is commonly done by using private IP addresses, so that all nodes are still able to access both external and internal network services properly.

Firewalls are used for receiving incoming transmissions from the Internet and routing the packets to the intended nodes on the LAN. Similarly, firewalls are also used for routing outgoing requests from a node on the LAN to the remote Internet service.

This method of forwarding network traffic may prove dangerous, since modern cracking tools can spoof internal IP addresses and allow a remote attacker to act as a node on the LAN. To prevent this, iptables provides routing and forwarding policies that can be implemented to prevent abnormal usage of network resources. For example, the FORWARD chain lets the administrator control where packets are routed within a LAN.

LAN nodes can communicate with each other, and they can accept packets forwarded from the firewall, using their internal IP addresses. However, this alone does not give them the ability to communicate with the external world and the Internet.

For LAN nodes with private IP addresses to communicate with the outside world, the firewall has to be configured for IP masquerading. The requests that LAN nodes make are then masked with the IP address of the firewall's external device, such as eth0.

HOW IPTABLES CAN BE USED TO CONFIGURE YOUR FIREWALL

Whenever a packet arrives at the firewall, it will be either processed or disregarded. The disregarded packets are normally those that are malformed or invalid in some technical way. Packets that are processed are handled by one of three built-in tables. The first table is the mangle table, which alters the quality-of-service bits in the packet header. The second table is the filter table, which takes care of the actual filtering of the packets. It consists of three chains, and you can place your firewall policy rules in these chains (shown in the diagram below):

Forward chain: It filters the packets to be forwarded to networks protected by the firewall.

Input chain: It filters the packets arriving at the firewall.

Output chain: It filters the packets leaving the firewall.

The third table is the NAT table. This is where the Network Address Translation or NAT is performed. There are two built-in chains in this:

Pre-routing chain: It NATs the packets whose destination address needs to be changed.

Post-routing chain: It NATs the packets whose source address needs to be changed.

Whenever a rule is set, the table it belongs to has to be specified. The filter table is the only exception: because most iptables rules are filter rules, the filter table is the default.
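
For example, the following two commands are equivalent, because the filter table is assumed when '-t' is omitted (the rule itself, permitting SSH to the firewall, is purely illustrative):

# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT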

The diagram below shows the flow of packets within the filter table. Packets entering the Linux system follow a specific logical path and decisions are made based on their characteristics. The path shown below is independent of the network interface they are entering or exiting:

The Filter Queue Table

linux-ip-filter-table

Each of the chains filters data packets based on:

  • Source and Destination IP Address
  • Source and Destination Port number
  • Network interface (eth0, eth1 etc)
  • State of the packet

Target for the rule: ACCEPT, DROP, REJECT, QUEUE, RETURN and LOG
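
To illustrate how these criteria and targets combine, here is a small sketch (the network, interface and port values are purely illustrative): accept new SSH connections from the internal network, then log and drop everything else arriving on eth0:

# iptables -A INPUT -i eth0 -p tcp -s 192.168.0.0/24 --dport 22 -m state --state NEW -j ACCEPT
# iptables -A INPUT -i eth0 -j LOG --log-prefix "INPUT DROP: "
# iptables -A INPUT -i eth0 -j DROP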

As mentioned previously, the table of NAT rules consists mainly of two chains: each rule is examined in order until one matches. The two chains are called PREROUTING (for Destination NAT, as packets first come in), and POSTROUTING (for Source NAT, as packets leave).

The NAT Table

linux-nat-table

At each of the points above, when a packet passes we look up what connection it is associated with. If it’s a new connection, we look up the corresponding chain in the NAT table to see what to do with it. The answer it gives will apply to all future packets on that connection.

The most important option here is the table selection option, '-t'. For all NAT operations, you will want to use '-t nat' to select the NAT table. The second most important option is '-A', which appends a new rule at the end of the chain (e.g. '-A POSTROUTING'), or '-I', which inserts one at the beginning (e.g. '-I PREROUTING').

The following command enables NAT for all outgoing packets. Eth0 is our WAN interface:

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

If you'd rather implement static NAT, mapping an internal host to a public IP, here's what the command would look like:

# iptables -A POSTROUTING -t nat -s 192.168.0.3 -o eth0 -d 0/0 -j SNAT --to 203.18.45.12

With the above command, all outgoing packets sent from internal IP 192.168.0.3 are mapped to external IP 203.18.45.12.

Taking it the other way around, the command below is used to enable port forwarding from the WAN interface, to an internal host. Any incoming packets on our external interface (eth0) with a destination port (dport) of 80, are forwarded to an internal host (192.168.0.5), port 80:

# iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to 192.168.0.5:80

HOW THE FORWARD CHAIN ALLOWS PACKET FORWARDING

Packet forwarding within a LAN is controlled by the FORWARD chain of the iptables firewall. If the firewall has its internal interface on eth2 and its external interface on eth0, the rules used to allow forwarding for the entire LAN would be:

# iptables -A FORWARD -i eth2 -j ACCEPT
# iptables -A FORWARD -o eth0 -j ACCEPT

This way, the firewall forwards traffic for the LAN nodes that have internal IP addresses. The packets enter through the gateway's eth2 device and are then routed on to their intended destination nodes.
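
If a tighter policy is preferred over the blanket rules above, a common variant (shown here only as a sketch) is to let the LAN initiate connections outward while only allowing reply traffic back in:

# iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
# iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT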

DYNAMIC FIREWALL

By default, the IPv4 policy in Fedora kernels disables support for IP forwarding. This prevents machines that run Fedora from functioning as a dedicated firewall. Furthermore, starting with Fedora 16, the default firewall solution is provided by firewalld. Although it is claimed to be the default, Fedora 16 still ships with the traditional iptables firewall. To enable the dynamic firewall in Fedora, you will need to disable the traditional firewall and install the new dynamic firewalld. The main difference between the two is that firewalld is smarter, in the sense that it does not have to be stopped and restarted each time a policy is changed, unlike the traditional firewall.

To disable the traditional firewall, there are two methods: graphical and command line. For the graphical method, the System-Config-Firewall GUI can be opened from the Applications menu > Other > Firewall. The firewall can then be disabled.

For the command line, the following commands are needed:

# systemctl stop iptables.service
# systemctl stop ip6tables.service

To remove iptables entirely from the system:

# systemctl disable iptables.service
rm '/etc/systemd/system/basic.target.wants/iptables.service'

# systemctl disable ip6tables.service

rm '/etc/systemd/system/basic.target.wants/ip6tables.service'

For installing Firewalld, you can use Yum:

# yum install firewalld firewall-applet

To enable and then start Firewalld you will need the following commands:

# systemctl enable firewalld.service
# systemctl start firewalld.service

The firewall-applet can be started from Applications menu > Other > Firewall Applet

When you hover the mouse over the firewall applet on the top panel, you can see the ports, services, etc. that are enabled. By clicking on the applet, the different services can be started or stopped. However, if you change the status and the applet crashes, then in order to regain control you will have to kill the applet using the following commands:

# ps -A | grep firewall*

This will tell you the PID of the running applet, and you can kill it with the following command:

# kill -9 <pid>

A restart of the applet can be done from the Applications menu, and now the service you had enabled will be visible.

To get around this, the command line option can be used:

  • Use firewall-cmd to enable a service, for example ssh: # firewall-cmd --enable --service=ssh
  • Enable samba for 10 seconds: # firewall-cmd --enable --service=samba --timeout=10
  • Enable ipp-client: # firewall-cmd --enable --service=ipp-client
  • Disable ipp-client: # firewall-cmd --disable --service=ipp-client
  • To restore the static firewall with lokkit again, simply use (after stopping and disabling firewalld): # lokkit --enabled

Installing and Configuring VSFTPD FTP Server For RedHat Enterprise Linux, CentOS and Fedora


Vsftpd is a popular FTP server for Unix/Linux systems. For those unaware of the vsftpd FTP server, note that this is not just another FTP server, but a mature product that has been around for over 12 years in the Unix world. While vsftpd is found as an installation option on many Linux distributions, Linux system administrators are not often presented with installation and configuration instructions for it, which is the reason we decided to cover it on Firewall.cx.

This article focuses on the installation and setup of the Vsftpd service on Linux Redhat Enterprise, Fedora and CentOS, however it is applicable to almost all other Linux distributions.  We’ll also take a look at a number of great tips which include setting quotas, restricting access to anonymous users, disabling uploads, setting a dedicated partition for the FTP service, configuring the system’s IPTable firewall and much more.

VSFTPD FEATURES

Following is a list of vsftpd’s features which confirms this small FTP package is capable of delivering a lot more than most FTP servers out there:

  • Virtual IP configurations
  • Virtual users
  • Standalone or inetd operation
  • Powerful per-user configurability
  • Bandwidth throttling
  • Per-source-IP configurability
  • Per-source-IP limits
  • IPv6
  • Encryption support through SSL integration
  • and much more….!

INSTALLING THE VSFTPD LINUX SERVER

To initiate the installation of the vsftpd package, simply open your CLI prompt and use the yum command (you need root privileges) as shown below:

# yum install vsftpd

Yum will automatically locate, download and install the latest vsftpd version.

CONFIGURE VSFTPD SERVER

To open the configuration file, type:

# vi /etc/vsftpd/vsftpd.conf

Turn off standard ftpd xferlog log format and turn on verbose vsftpd log format by making the following changes in the vsftpd.conf file:

xferlog_std_format=NO
log_ftp_protocol=YES
Note: the default vsftpd log file is /var/log/vsftpd.log.

The above two directives will enable logging of all FTP transactions.

To lock down users to their home directories:

chroot_local_user=YES

You can create warning banners for all FTP users, by defining the path:

banner_file=/etc/vsftpd/issue

Now you can create the /etc/vsftpd/issue file with a message compliant with the local site policy or a legal disclaimer:

“NOTICE TO USERS – Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address”.
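
For example, the banner file could be created directly from the shell (the wording is simply the sample notice above):

# echo "NOTICE TO USERS - Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address." > /etc/vsftpd/issue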

TURN ON VSFTPD SERVICE

Turn on vsftpd on boot:

# systemctl enable vsftpd.service

Start the service:

# systemctl start vsftpd.service

You can verify the service is running and listening on the correct port using the following command:

# netstat -tulpn | grep :21

Here’s the expected output:

tcp        0      0 0.0.0.0:21              0.0.0.0:*               LISTEN      9734/vsftpd

CONFIGURE IPTABLES TO PROTECT THE FTP SERVER

In case IPTables are configured on the system, it will be necessary to edit the iptables file and open the ports used by FTP to ensure the service’s operation.

To open file /etc/sysconfig/iptables, enter:

# vi /etc/sysconfig/iptables

Add the following line, ensuring that it appears before the final LOG and DROP lines of the RH-Firewall-1-INPUT chain:

-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT

Next, open file /etc/sysconfig/iptables-config, and enter:

# vi /etc/sysconfig/iptables-config

Ensure that the space-separated list of modules contains the FTP connection-tracking module:

IPTABLES_MODULES="ip_conntrack_ftp"

Save and close the file and finally restart the firewall using the following commands:

# systemctl restart iptables.service
# systemctl restart ip6tables.service

 

TIP: VIEW FTP LOG FILE

Type the following command:

# tail -f /var/log/vsftpd.log

TIP: RESTRICTING ACCESS TO ANONYMOUS USER ONLY

Edit the vsftpd configuration file /etc/vsftpd/vsftpd.conf and add the following:

local_enable=NO

TIP: TO DISABLE FTP UPLOADS

Edit the vsftpd configuration file /etc/vsftpd/vsftpd.conf and add the following:

write_enable=NO

TIP: TO ENABLE DISK QUOTA

Disk quota must be enabled to prevent users from filling a disk used by FTP upload services. Edit the vsftpd configuration file and add or correct the following configuration option, which specifies the directory vsftpd will try to change into after an anonymous login:

anon_root=/ftp/ftp/pub

The ftp users are the same users as those on the hosting machine.

You could have a separate group for ftp users, to help keep their privileges down (for example ‘anonftpusers’). Knowing that, your script should do:

useradd -d /www/htdocs/hosted/bob -g anonftpusers -s /sbin/nologin bob

echo bobspassword | passwd --stdin bob
echo bob >> /etc/vsftpd/user_list

Be extremely careful with your scripts, as they will have to be run as root.

However, for this to work you will have to have the following options enabled in /etc/vsftpd/vsftpd.conf:

userlist_enable=YES
userlist_deny=NO

SECURITY TIP: PLACE THE FTP DIRECTORY ON ITS OWN PARTITION

Separating operating system files from FTP users' files results in a better and more secure system. Restricting the growth of certain file systems is possible using various techniques. For example, use a /ftp partition to store all ftp home directories and mount /ftp with the nosuid, nodev and noexec options. A sample /etc/fstab entry:

/dev/sda5  /ftp          ext3    defaults,nosuid,nodev,noexec,usrquota 1 2
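
With the usrquota mount option in place, the quota database can then be initialised and per-user limits assigned. A minimal sketch, assuming the standard quota tools are installed and using the example user bob from above:

# mount -o remount /ftp
# quotacheck -cum /ftp
# edquota -u bob
# quotaon /ftp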

 

EXAMPLE FILE FOR VSFTPD.CONF

Following is an example vsftpd.conf. It allows the users listed in the user_list file to log in, denies anonymous users, and places quite tight restrictions on what users can do:

# Allow anonymous FTP?

anonymous_enable=NO
#
# Allow local users to log in?
local_enable=YES
#
# Allow any form of FTP write command.
write_enable=YES
#
# To make files uploaded by your users writable by only
# themselves, but readable by everyone and if, through some
# misconfiguration, an anonymous user manages to upload a file,
# the file will have no read, write or execute permission. Just to be
# safe.
local_umask=0000
file_open_mode=0644
anon_umask=0777
#
# Allow the anonymous FTP user to upload files?
anon_upload_enable=NO
#
# Activate directory messages – messages given to remote users when they
# go into a certain directory.
dirmessage_enable=NO
#
# Activate logging of uploads/downloads?
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data)?
connect_from_port_20=YES
#
# Log file in standard ftpd xferlog format?
xferlog_std_format=NO
#
# User for vsftpd to run as?
nopriv_user=vsftpd
#
# Login banner string:
ftpd_banner=NOTICE TO USERS – Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address.
#
# chroot local users (only allow users to see their directory)?
chroot_local_user=YES
#
# PAM service name?
pam_service_name=vsftpd
#
# Enable user_list (see next option)?
userlist_enable=YES
#
# Should the user_list file specify users to deny(=YES) or to allow(=NO)
userlist_deny=NO
#
# Standalone (not run through xinetd) listen mode?
listen=YES
#
#
tcp_wrappers=NO
#
# Log all ftp actions (not just transfers)?
log_ftp_protocol=YES
# Initially YES for trouble shooting, later change to NO
#
# Show file ownership as ftp:ftp instead of real users?
hide_ids=YES
#
# Allow ftp users to change permissions of files?
chmod_enable=NO
#
# Use local time?
use_localtime=YES
#
# List of raw FTP commands, which are allowed (some commands may be a security hazard):
cmds_allowed=ABOR,QUIT,LIST,PASV,RETR,CWD,STOR,TYPE,PWD,SIZE,NLST,PORT,SYST,PRET,MDTM,DEL,MKD,RMD

With this config, uploaded files are not readable or executable by anyone, so the server is acting as a 'dropbox'. Change the file_open_mode option to change that.

Lastly, it is also advised to have a look at 'man vsftpd.conf' for a full list and description of all options.

Installation and configuration of Linux DHCP Server


For a cable modem or a DSL connection, the service provider dynamically assigns the IP address to your PC. When you install a DSL or a home cable router between your home network and your modem, your PC will get its IP address from the home router during boot up. A Linux system can be set up as a DHCP server and used in place of the router.

DHCP is not installed by default on your Linux system. To install it, first gain root privileges:

$ su -

You will be prompted for the root password and you can install DHCP by the command:

# yum install dhcp

Once all the dependencies are satisfied, the installation will complete.

START THE DHCP SERVER

You will need root privileges for enabling, starting, stopping or restarting the dhcpd service:

# systemctl enable dhcpd.service

Once enabled, the dhcpd services can be started, stopped and restarted with:

# systemctl start dhcpd.service
# systemctl stop dhcpd.service
# systemctl restart dhcpd.service

or with the use of the following commands if systemctl command is not available:

# service dhcpd start
# service dhcpd stop
# service dhcpd restart

To determine whether dhcpd is running on your system, you can seek its status:

# systemctl status dhcpd.service

Another way of knowing if dhcpd is running is to use the ‘service‘ command:

# service dhcpd status

Note that dhcpd has to be configured to start automatically on next reboot.

CONFIGURING THE LINUX DHCP SERVER

Depending on the version of the Linux installation you are currently running, the configuration file may reside in either the /etc/dhcpd or /etc/dhcpd3 directory.

When you install the DHCP package, a skeleton configuration file and a sample configuration file are created. Both are quite extensive, and the skeleton configuration file has most of its commands deactivated with # at the beginning. The sample configuration file can be found in the location /usr/share/doc/dhcp*/dhcpd.conf.sample.

When the dhcpd.conf file is created, a subnet section is generated for each of the interfaces present on your Linux system; this is very important. Following is a small part of the dhcpd.conf file:

ddns-update-style interim;
ignore client-updates;

subnet 192.168.1.0 netmask 255.255.255.0 {

# The range of IP addresses the server
# will issue to DHCP enabled PC clients
# booting up on the network
range 192.168.1.201 192.168.1.220;

# Set the amount of time in seconds that
# a client may keep the IP address
default-lease-time 86400;
max-lease-time 86400;

# Set the default gateway to be used by
# the PC clients
option routers 192.168.1.1;

# Don't forward DHCP requests from this
# NIC interface to any other NIC interfaces
option ip-forwarding off;

# Set the broadcast address and subnet mask
# to be used by the DHCP clients
option broadcast-address 192.168.1.255;
option subnet-mask 255.255.255.0;

# Set the NTP server to be used by the
# DHCP clients
option ntp-servers 192.168.1.100;

# Set the DNS server to be used by the
# DHCP clients
option domain-name-servers 192.168.1.100;

# If you specify a WINS server for your Windows clients,
# you need to include the following option in the dhcpd.conf file:
option netbios-name-servers 192.168.1.100;

# You can also assign specific IP addresses based on the clients'
# ethernet MAC address as follows (host's name is "laser-printer"):
host laser-printer {
hardware ethernet 08:00:2b:4c:59:23;
fixed-address 192.168.1.222;
}
}

#
# List an unused interface here
#
subnet 192.168.2.0 netmask 255.255.255.0 {
}

The IP addresses will need to be changed to meet the ranges suitable to your network. There are other option statements that can be used to configure the DHCP. As you can see, some of the resources such as printers, which need fixed IP addresses, are given the specific IP address based on the NIC MAC address of the device.

For more information, you may read the relevant man pages:

# man dhcp-options

ROUTING WITH A DHCP SERVER

When a PC with a DHCP configuration boots, it requests an IP address from the DHCP server. To do this, it sends a standard DHCP request packet to the DHCP server with a source IP address of 255.255.255.255. A route has to be added for this 255.255.255.255 address so that the DHCP server knows on which interface it has to send the reply. This is done by adding the route information to the /etc/sysconfig/network-scripts/route-eth0 file, assuming the route is to be added to the eth0 interface:

#

# File /etc/sysconfig/network-scripts/route-eth0
#
255.255.255.255/32 dev eth0

After defining the interface for the DHCP routing, it has to be further ensured that your DHCP server listens only on that interface and no other. For this, the /etc/sysconfig/dhcpd file has to be edited and the preferred interface added to the DHCPDARGS variable. If the interface is eth0, the following changes need to be made:

# File: /etc/sysconfig/dhcpd

DHCPDARGS=eth0

TESTING THE DHCP

Using the netstat command along with the -au option will show the list of interfaces listening on the bootp or DHCP UDP port:

# netstat -au  | grep bootp

will result in the following:

udp     0         0 192.168.1.100:bootps         *:*

Additionally, a check on the /var/log/messages file will show the defined interfaces used from the time the dhcpd daemon was started:

Feb  24 17:22:44 Linux-64 dhcpd: Listening on LPF/eth0/00:e0:18:5c:d8:41/192.168.1.0/24
Feb  24 17:22:44 Linux-64 dhcpd: Sending on  LPF/eth0/00:e0:18:5c:d8:41/192.168.1.0/24
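
Finally, a Linux client on the same network segment can be used to confirm that leases are actually handed out. Running the ISC DHCP client manually in verbose mode (assuming the client's interface is eth0) will show the DISCOVER/OFFER/REQUEST/ACK exchange:

# dhclient -v eth0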

This confirms the DHCP service has been installed successfully and is operating correctly.

Complete Guide on, Linux Bind DNS



LINUX BIND DNS – INTRODUCTION TO THE DNS DATABASE (BIND)


BIND (Berkeley Internet Name Domain) is popular software for translating domain names into IP addresses, usually found on Linux servers. This article will explain the basic concepts of DNS BIND and analyse the associated files required to successfully set up your own DNS BIND server. After reading this article, you will be able to successfully install and set up a Linux BIND DNS server for your network.

ZONES AND DOMAINS

The programs that store information about the domain name space are called name servers, as you probably already know. Name Servers generally have complete information about some part of the domain name space (a zone), which they load from a file. The name server is then said to have authority for that zone.

The term zone is not one that you come across every day while you’re surfing on the Internet. We tend to think that the domain concept is all there is when it comes to DNS, which makes life easy for us, but when dealing with DNS servers that hold data for our domains (name servers), then we need to introduce the zone term since it is essential so we can understand the setup of a DNS server.

The difference between a zone and a domain is important, but subtle. The best way to understand the difference is by using a good example, which is coming up next.

The COM domain is divided into many zones, including the hp.com zone, sun.com, it.com. At the top of the domain, there is also a com zone.

The diagram below shows you how a zone fits within a domain:

 dns-bind-intro-1

The trick to understanding how it works is to remember that a zone exists “inside” a domain. Name servers load zone files, not domains. Zone files contain information about the portion of a domain for which they are responsible. This could be the whole domain (sun.com, it.com) or simply a portion of it (hp.com + pr.hp.com).

In our example, the hp.com domain has two subdomains, support.hp.com and pr.hp.com. The first one, support.hp.com is controlled by its own name servers as it has its own zone, called the support.hp.com zone. The second one though, pr.hp.com is controlled by the same name server that takes care of the hp.com zone.

The hp.com zone has very little information about the support.hp.com zone; it simply knows it's right below. If anyone requires more information on support.hp.com, they will be referred to the authoritative name servers for that subdomain, which are the name servers for that zone.

So you see that even though support.hp.com is a subdomain just like pr.hp.com, it is not setup and controlled the same way as pr.hp.com.

On the other hand, the sun.com domain has one zone (the sun.com zone) that contains and controls the whole domain. This zone is loaded by the authoritative name servers.

BIND? NEVER HEARD OF IT !

As mentioned in the beginning of this article, BIND stands for Berkeley Internet Name Domain. Keeping things simple, it's a program you download (www.bind.org) and install on your Unix or Linux server to give it the ability to become a DNS server for your private (LAN) or public (Internet) network.

The majority of DNS servers are based on BIND as it's a proven and reliable DNS server. The download is approximately 4.8 MBytes. Untarring and compiling BIND is a pretty straightforward process and the steps required will depend on your Linux distribution and version. If you follow the instructions provided with the download, you shouldn't have any problems. For simplicity purposes, we assume you've compiled and installed the BIND program using the provided instructions.
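
As a rough outline only (the exact steps and options vary by BIND version and distribution), building from source typically follows the familiar sequence, with <version> standing in for the release you downloaded:

# tar xzf bind-<version>.tar.gz
# cd bind-<version>
# ./configure
# make
# make install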

SETTING UP YOUR ZONE DATA

No matter what Linux distribution you have, the file structure is pretty much the same. I have BIND installed on my Linux server, which runs Slackware v8 with kernel 2.4.19. By following the installation procedure found in the documentation provided with BIND, you will have the server installed within 15 min at most.

Once the installation of BIND is complete you need to start creating your zone data files. Remember, these are the files the DNS server will load in order to understand how your domain is setup and the various hosts within it.

A DNS server has multiple files that contain information about the domain setup. Of these files, one maps all host names to IP addresses and others map IP addresses back to hostnames. The name-to-IP-address lookup is sometimes called forward mapping and the IP-address-to-name lookup reverse mapping. Each network will have its own file for reverse mapping.

As a convention in this section, a file that maps hostnames to IP Addresses will be called db.DOMAIN, where DOMAIN is the name of your domain e.g. db.firewall.cx, and db is short for DataBase. The files mapping IP Addresses to hostnames are called db.ADDR, where ADDR is the network number without trailing zeros or the specification of a netmask, e.g. db.192.168.0 for the 192.168.0.0 network.

The collection of our db.DOMAIN and db.ADDR files are our Zone Data files. There are a few other zone data files, some of which are created during the installation of BIND: named.ca, localhost.zone and named.local.

Named.ca contains information about the root servers on the Internet, should your DNS server need to contact one of them. Localhost.zone and named.local are there to cover the loopback network. The loopback address is a special address hosts use to direct traffic to themselves. This is usually IP address 127.0.0.1, which belongs to the 127.0.0.0/24 network.

These files must be present in each DNS server and are the same for every DNS server.

QUICK SUMMARY OF FILES SO FAR..

Let’s have a quick look at the files we have covered so far to make sure we don’t lose track:

1) Following files must be created by you and will contain the data for our zone:

  • db.DOMAIN e.g db.space.net – Host to IP Address mapping
  • db.ADDR e.g db.192.168.0 – IP Address to Host mapping

2) Following files are usually created by the BIND installation:

  • named.ca – Contains the ROOT DNS servers
  • named.local & localhost.zone – Special files so the server can direct traffic to itself.

You should also be aware that the file names can change, there is no standard for names, it’s just very convenient and tidy to keep some type of convention.

To tie all the zone data files together a name server needs a configuration file. BIND version 8 and above calls it named.conf and it can be found in your /etc dir once you install the BIND package. Named.conf simply tells the name server where your zone files are located and we will be analysing this file later on.

Most entries in the zone data files are called DNS resource records. Since DNS lookups are case insensitive, you can enter names in your zone data files in uppercase, lowercase or mixed case. I tend to use lowercase.

Resource records must start in the first column of a line. The DNS RFCs have samples that present the order in which one should enter the resource records. Some people choose to follow this order, while others don’t. You are not required to follow this order, but I do :)

Here is the order of resource records in the zone data file:

SOA record – Indicates authority for this zone.

NS record – Lists a name server for this zone

MX record – Indicates the mail exchange server for the domain

A record – Name to IP Address mapping (gives the IP Address for a host)

CNAME record – Canonical name (used for aliases)

PTR record – Address to name mapping (used in db.ADDR)

The next article deals with the construction of our first zone data file, db.firewall.cx of our example firewall.cx domain.


LINUX BIND DNS – CONFIGURING DB.DOMAIN ZONE DATA FILE


It’s time to start creating our zone files. We’ll follow the standard format, which is given in the DNS RFCs, in order to keep everything neat and less confusing.

First step is to decide on the domain we're using and we've decided on the popular firewall.cx. This means that the first zone file will be db.firewall.cx. Note that this file is to be placed on the Master DNS server for our domain.

We will progressively build our database by populating it step by step and explaining each step we take. At the end of the step-by-step example, we'll grab each step's data and put it all together so we can see how the final version of our file will look. We strongly believe this is the best method of explaining how to create a zone file without confusing the hell out of everyone!

CONSTRUCTING DB.FIREWALL.CX – DB.DOMAIN

It is important at this point to make it clear that we are setting up a primary DNS server. For a simple DNS caching or secondary name server, the setup is a lot simpler and is covered in the articles to come.

The first entry for our file is the Default TTL – Time To Live. This is defined using the $TTL control statement. $TTL specifies the time to live for all records in the file that follow the statement and don’t have an explicit TTL. We are going to set ours to 24 hours – 86400 seconds.

The units used are seconds. An older common TTL value for DNS was 86400 seconds, which is 24 hours. A TTL value of 86400 would mean that, if a DNS record was changed on the authoritative nameserver, DNS servers around the world could still be showing the old value from their cache for up to 24 hours after the change.

Newer DNS methods that are part of a DR (Disaster Recovery) system may have some records deliberately set extremely low on TTL. For example a 300 second TTL would help key records expire in 5 minutes to help ensure these records are flushed world wide quickly. This gives administrators the ability to edit and update records in a timely manner. TTL values are “per record” and setting this value on specific records is normally honored automatically by all standard DNS systems world-wide.   Dynamic DNS (DDNS) usually have the TTL value set to 5 minutes, or 300 seconds.

Next up is the SOA Record. The SOA (Start Of Authority) resource record indicates that this name server is the best source of information for the data within this zone (this record is required in each db.DOMAIN and db.ADDR file), which is the same as saying this name server is Authoritative for this zone. There can be only one SOA record in every data zone file (db.DOMAIN).

$TTL 86400

firewall.cx. IN SOA voyager.firewall.cx. admin.voyager.firewall.cx. (

                            1 ; Serial Number

3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after 1 week
1h ) ; Negative caching TTL of 1 hour

Let’s explain the above code:

firewall.cx. is the domain name and must always be stated in the first column of the line. Be sure you include the trailing dot “.” after the domain name; we'll explain later on why this is needed.

The IN stands for Internet. This is one class of data and while other classes exist, you won’t see them at all because they are not used :)

The SOA is an important resource record. What follows is the actual primary name server for firewall.cx. In our example, this is the server named “voyager” and its Fully Qualified Domain Name (FQDN) is voyager.firewall.cx. Notice the trailing “.” is present here as well.

Next up is the entry admin.voyager.firewall.cx. which is the email address of the person responsible for this domain. Take the dot “.” after the admin entry and replace it with a “@” and you have a valid email address: admin@voyager.firewall.cx. Most times you will see root, postmaster or hostmaster instead of “admin”.

The “(” parentheses allow the SOA record to span more than one line, while in most cases the fields that follow are used by the secondary name servers and any other name server requesting information about the domain.

The serial number “1 ; Serial Number” entry is used by the secondary name server to keep track of changes that might have occurred in the master's zone file. When the secondary name server contacts the primary name server, it checks this value. If the secondary's serial number is lower than the primary's, its data is out of date; when they are equal, the data is up to date. This means that when you make any modifications to the primary's zone file, you must increment the serial number by at least one.

Note that anything after the semicolon (;) is considered a remark and not taken into consideration by the DNS BIND Service. This allows us to create easy-to-understand comments for future reference.

The refresh “3h ; Refresh after 3 hours” tells the secondary name server how often to check the primary’s server’s data, to ensure its copy for this zone is up to date.

If the secondary name server tries to contact the primary and fails, the retry “1 h ; Retry after 1 hour” is used to tell the secondary name server how long to wait until it tries to contact the primary again.

If the secondary name server fails to contact the primary for longer than the time specified in the fourth entry “1 w ; Expire after 1 week“, then the zone data on the secondary name server is considered too old and will expire.

The last line “1h ) ; Negative caching TTL of 1 hour” is how long other name servers will cache negative responses about the zone. These negative responses say that a particular domain, or type of data sought for a particular domain name, doesn't exist. Notice the SOA section finishes with the “)” parenthesis.

Next up in the file are the name server (NS) records:

; Name Servers defined here

firewall.cx. IN NS voyager.firewall.cx.

firewall.cx. IN NS gateway.firewall.cx.

These entries define the two name servers (voyager and gateway) for our domain firewall.cx. These entries will be also in the db.ADDR file for this domain as we will see later on.

It’s time to enter our MX records. These records define the mail exchange servers for our domain, and this is how any client, host or email server is able to find a domain’s email server:

; Mail Exchange servers defined here

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

Let’s explain what exactly these entries mean. The first line specifies that voyager.firewall.cx is a mail exchanger for firewall.cx, just as the second line (…IN MX 20 gateway…) specifies that gateway.firewall.cx is also a mail exchanger for the domain. The MX record indicates that the following hosts are mail exchanger servers for the domain and the numbers 10 and 20 indicate the priority level. The smaller the number, the higher the priority.

This means that voyager.firewall.cx is a higher priority mail server than gateway.firewall.cx. If another server trying to send email to firewall.cx fails to contact the highest priority mail server (voyager.firewall.cx), it will then fall back to the secondary, which in our case is gateway.firewall.cx.

These entries were introduced to prevent mail loops. When another email server (unlikely for a private domain like mine, but the same rule applies for the Internet) wants to send mail to firewall.cx, it will first try to contact the mail exchanger with the smallest number, which in our case is voyager.firewall.cx. The smaller the number, the higher the priority when there is more than one mail server.

In our example, if we replaced:

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

with

firewall.cx. IN MX 50 voyager.firewall.cx.

firewall.cx. IN MX 100 gateway.firewall.cx.

the result in matter of server priority, would be the same.

Let's now have a look at the next part of our zone file: host IP Addresses and alias records:

; Host addresses defined here

localhost.firewall.cx. IN A 127.0.0.1

voyager.firewall.cx. IN A 192.168.0.15

enterprise.firewall.cx. IN A 192.168.0.5

gateway.firewall.cx. IN A 192.168.0.10

admin.firewall.cx. IN A 192.168.0.1

; Aliases

www.firewall.cx. IN CNAME voyager.firewall.cx.

Most fields in this section are easy to understand. We start by defining our localhost (local loopback) “localhost.firewall.cx. IN A 127.0.0.1” and continue with the servers on our private network, these include voyager, enterprise, gateway and admin. The “A” record stands for IP Address. So “voyager.firewall.cx. IN A 192.168.0.15” translates to a host called voyager located in the firewall.cx domain with an INternet ip Address of 192.168.0.15. See the pattern? :)

The second block has the aliases table, where we created a Canonical Name (CNAME) record. A CNAME record simply maps an alias to its canonical name; in our example, www is the alias and voyager.firewall.cx is the canonical name.

When a name server looks up a name and finds CNAME records, it replaces the name (alias – www) with its canonical name (voyager.firewall.cx) and looks up the canonical name (voyager.firewall.cx).

For example, when a name server looks up www.firewall.cx, it will replace the ‘www‘ with ‘voyager‘ and look up the IP Address for voyager.firewall.cx.

This also explains the existence of “www” in all URLs – it's nothing more than an alias which, ultimately, is replaced with the CNAME record defined.

THE COMPLETE DB.DOMAIN CONFIGURATION FILE

That completes a simple domain setup! We have now created a working zone file that looks like this:

$TTL 86400

firewall.cx. IN SOA voyager.firewall.cx. admin.voyager.firewall.cx. (

                            1 ; Serial Number

3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after 1 week
1h ) ; Negative caching TTL of 1 hour

; Name Servers defined here

firewall.cx. IN NS voyager.firewall.cx.

firewall.cx. IN NS gateway.firewall.cx.

; Mail Exchange servers defined here

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

; Host Addresses Defined Here

localhost.firewall.cx. IN A 127.0.0.1

voyager.firewall.cx. IN A 192.168.0.15

enterprise.firewall.cx. IN A 192.168.0.5

gateway.firewall.cx. IN A 192.168.0.10

admin.firewall.cx. IN A 192.168.0.1

; Aliases

www.firewall.cx. IN CNAME voyager.firewall.cx.

 

A quick glance at this file tells you a lot about our lab domain firewall.cx, and this is probably the best time to explain why we should not omit the trailing dot at the end of the domain name:

If we took gateway.firewall.cx as an example and omitted the dot “.” at the end of our entries, the system would translate it like this: gateway.firewall.cx.firewall.cx – definitely not what we want!

As you see, the ‘firewall.cx‘ is appended to the end of our Fully Qualified Domain Name for the particular resource record (gateway). This is why it’s so important to never forget that extra dot “.” at the end!

Our next article will cover the db.ADDR file, which will take the name db.192.168.0. for our example.


LINUX BIND DNS – CONFIGURING THE DB.192.168.0 ZONE DATA FILE


The db.192.168.0 zone data file is the second file we are creating for our DNS server. As outlined in the DNS-BIND Introduction, this file’s purpose is to provide the IP Address -to- name mappings. Note that this file is to be placed on the Master DNS server for our domain.

CONSTRUCTING DB.192.168.0

While we start to construct the file, you will notice many similarities with our previous file. Most resource records have already been covered and explained in our previous articles and therefore we will not repeat them on this page.

The first line is our $TTL control statement, followed by the Start Of Authority (SOA) resource record:

$TTL 86400

0.168.192.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

1 ; Serial
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after one week
1h ) ; Negative Caching TTL of 1 hour

As you can see, everything above, except the first column of the first line, is identical to the db.firewall.cx file. The “0.168.192.in-addr.arpa” entry is our IP network in reverse order. The trick to figure out your own in-addr.arpa entry is to simply take your network address, reverse it, and add “.in-addr.arpa.” at the end.

Name server resource records are next, followed by the PTR resource records that create our IP Address-to-name mappings. The syntax is nearly the same as in the db.DOMAIN file, but keep in mind that we don't enter the full reversed IP Address for the name servers, only the first three octets, which represent the network they belong to:

; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.firewall.cx.

0.168.192.in-addr.arpa. IN NS gateway.firewall.cx.

; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.firewall.cx.
5.0.168.192.in-addr.arpa. IN PTR enterprise.firewall.cx.
10.0.168.192.in-addr.arpa. IN PTR gateway.firewall.cx.
15.0.168.192.in-addr.arpa. IN PTR voyager.firewall.cx.

Time to look at the configuration file with all its entries:

$TTL 86400

0.168.192.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

1 ; Serial
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after one week
1h ) ; Negative Caching TTL of 1 hour

; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.firewall.cx.
0.168.192.in-addr.arpa. IN NS gateway.firewall.cx.

; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.firewall.cx.
5.0.168.192.in-addr.arpa. IN PTR enterprise.firewall.cx.
10.0.168.192.in-addr.arpa. IN PTR gateway.firewall.cx.
15.0.168.192.in-addr.arpa. IN PTR voyager.firewall.cx.

This completes the db.192.168.0 Zone data file.

Remember the whole purpose of this file is to provide an IP Address-to-name mapping, which is why we do not use the domain name in front of each line, but the reversed IP Address followed by the in-addr.arpa. entry.


LINUX BIND DNS – COMMON BIND FILES – NAMED.LOCAL, NAMED.CONF, DB.127.0.0 ETC


So far we have covered in great detail the main files required for the firewall.cx domain. These files, which we named db.firewall.cx and db.192.168.0, define all the resource records and hosts available in the firewall.cx domain.

We will now analyse the remaining common BIND files, to help you understand why they exist and how they fit into the big picture :)

OUR COMMON FILES

There are 3 common files that we're going to look at. The first two files' contents change slightly depending on the domain, because they must be aware of the various hosts and the domain name for which they are created. The third file in the list below is always the same on all DNS servers, and we will explain more about it later on.

So here are our files:

  • named.local or db.127.0.0
  • named.conf
  • named.ca or db.cache

THE NAMED.LOCAL FILE

The named.local file, or db.127.0.0 as some might call it, is used to cover the loopback network. Since no one was given the responsibility for the 127.0.0.0 network, we need this file to make sure there are no errors when the DNS server needs to direct traffic to itself (127.0.0.1 IP Address – Loopback).

When installing BIND, you will find this file in your caching example directory: /var/named/caching-example, so you can either create a new one or modify the existing one to meet your requirements.

The file is no different than our example db.addr file we saw previously:

$TTL 86400

0.0.127.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

1 ; Serial
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after 1 week
1h ) ; Negative caching TTL of 1 hour

0.0.127.in-addr.arpa. IN NS voyager.firewall.cx.
0.0.127.in-addr.arpa. IN NS gateway.firewall.cx.
1.0.0.127.in-addr.arpa. IN PTR localhost.

That’s all there is for named.local file !

THE NAMED.CA FILE

The named.ca file (also known as the “root hints file”) is created when you install BIND and doesn't need to be modified unless you have an old version of BIND or it's been a while since you installed BIND.

The purpose of this file is to let your DNS server know about the Internet ROOT servers. There is no point displaying all of the file's content because it's quite big, so we will show an entry of a ROOT server to get an idea of what it looks like:

; last update: Aug 22, 2011
; related version of root zone: 1997082200
; formerly NS.INTERNIC.NET

. 3600000 IN NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4

The domain name “.” refers to the root zone and the “3600000” is an explicit time to live for the records in the file, but it is generally ignored :)

The rest are self-explanatory. If you want to grab a new copy of the root hints file you can ftp to ftp.rs.internic.net (198.41.0.6) and log on anonymously; there you will find the latest up-to-date version.

 

THE NAMED.CONF FILE

The named.conf file is usually located in the /etc directory and is the key file that ties all the zone data files together and lets the DNS server know where they are located in the system. This file is automatically created during the installation but you must edit it in order to add new entries that will point to any new zone files you have created.

Let’s have a close look at the named.conf file and explain:

options {
directory "/var/named";

};

// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx – name to ip mapping
zone "firewall.cx" IN {
type master;
file "db.firewall.cx";
};

// Entry for Firewall.cx – ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

At first glance it might seem a maze, but it’s a lot simpler than you think. Break down each paragraph and you can see clearly the pattern that follows.

Starting from the top, the options section simply defines the directory where all the files to follow are located, the rest are simply comments.

The root servers section tells the DNS server where to find the root hints file, which contains all the root servers.

Next up is the entry for our domain firewall.cx, we let the DNS server know which file contains all the zone entries for this domain and let it know that it will act as a master DNS server for the domain. The same applies for the entry to follow, which contains the IP to Name mappings, this is the 0.168.192.in-addr.arpa zone.

The last entry is required for the local loopback. We tell the DNS server which file contains the local loopback entries.

Notice the “IN” class that is present in each section? If we accidentally forget to include it in our zone files, it wouldn't matter because the DNS server will automatically figure out the class from our named.conf file. It's imperative not to forget the “IN” (Internet) class in the named.conf, whereas it really doesn't matter if you don't put it in the zone files. It's still good practice to enter it in the zone files as we did, just to make sure you don't have any problems later on.

And that ends our discussion for the common DNS (BIND) files.  Next up is the configuration of our Linux BIND Slave/Secondary DNS server.


THE LINUX BIND SETUP & CONFIGURE SECONDARY (SLAVE) DNS SERVER


Setting up a Secondary (or Slave) DNS sever is much easier than you might think. All the hard work is done when you setup the Master DNS server by creating your database zone files and configuring named.conf.

If you are wondering why the Slave DNS server is so easy to set up, remember that all the Slave DNS server does is update its database from the Master DNS server (zone transfer). Almost all the files we configure on the Master DNS server are copied to the Slave DNS server, which acts as a backup in case the Master DNS server fails.

SETTING UP THE SLAVE DNS SERVER

Let’s have a closer look at the requirements for getting our Slave DNS server up and running.

Keeping in mind that the Slave DNS server is on another machine, we are assuming that you have downloaded and successfully installed the same BIND version on it. We need to copy 3 files from the Master DNS server, make some minor modifications to one file and launch our Slave DNS server…. the rest will happen automatically :)

SO WHICH FILES DO WE COPY?

The files required are the following:

  • named.conf (our configuration file)
  • named.ca or db.cache (the root hints file, contains all root servers)
  • named.local (local loopback for the specific DNS server so it can direct traffic to itself)

The rest of the files, which are our db.DOMAIN (db.firewall.cx for our example) and db.in-addr.arpa (db.192.168.0 for our example), will be transferred automatically (zone transfer) as soon as the newly brought up Slave DNS server contacts the Master DNS server to check for any zone files.

HOW DO I COPY THE FILES?

There are plenty of ways to copy the files between servers. The method you will use depends on where the servers are located. If, for example, they are right next to you, you can simply use a floppy disk to copy them or use ftp to transfer them.

If you’re going to try to transfer them over a network, and especially over the Internet, then you might consider something more secure than ftp. We would recommend you use SCP, which stands for Secure Copy and uses SSH (Secure SHell).

SCP can be used independently of SSH as long as there is an SSH server on the other side. SCP will allow you to transfer files over an encrypted connection and therefore is preferred for sensitive files, plus you get to learn a new command :)

The command used is as follows: scp localfile-to-copy username@remotehost:destination-folder. Here is the command line we used from our Gateway server (Master DNS): scp /etc/named.conf root@voyager:/etc/

Keep in mind that the files we copy are placed in the same directory as on the Master DNS server. Once we have copied all three files we need to modify the named.conf file. To make things simple, we are going to show you the original file copied from the Master DNS and the modified version which now sits on the Slave DNS server.

The Master named.conf file is a straight cut-and-paste from the “Common BIND Files” page, whereas the Slave named.conf has been modified to suit our Slave DNS server. The changes are in the two firewall.cx zone entries: the type and file values are different, and a masters statement has been added:

Master named.conf file

options {
	directory "/var/named";
};

// Root Servers
zone "." IN {
	type hint;
	file "named.ca";
};

// Entry for Firewall.cx – name to ip mapping
zone "firewall.cx" IN {
	type master;
	file "db.firewall.cx";
};

// Entry for firewall.cx – ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
	type master;
	file "db.192.168.0";
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
	type master;
	file "named.local";
};

Slave named.conf file

options {
	directory "/var/named";
};

// Root Servers
zone "." IN {
	type hint;
	file "named.ca";
};

// Entry for Firewall.cx – name to ip mapping
zone "firewall.cx" IN {
	type slave;
	file "bak.firewall.cx";
	masters { 192.168.0.10; };
};

// Entry for firewall.cx – ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
	type slave;
	file "bak.192.168.0";
	masters { 192.168.0.10; };
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
	type master;
	file "named.local";
};

As you can see, most of the slave’s named.conf file is similar to the master’s, except for a few fields and values, which we’ll explain right now.

The type value is now slave, which is logical enough since it tells the DNS server whether it is a master or a slave for that zone.

The file “bak.firewall.cx”; entry tells the server what name to give the zone files once they are transferred from the master DNS server. We tend to follow the bak.domain format because that’s how we see the slave server: a backup DNS server. It is not imperative to use this naming scheme; you can change it to whatever you wish. Once the server is up and running, you will soon see these files appear in the /var/named directory.

Lastly, the masters { 192.168.0.10; }; entry gives our slave server the IP address of the master DNS server it needs to contact to retrieve the zone files.

That’s all there is to setting up the slave DNS server! As we mentioned, once the master is set up, the slave is a piece of cake because it involves very few changes.
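A quick way to confirm the slave is working is to query both servers for the zone’s SOA record and compare the serial numbers; they should match once the zone transfer has completed. The sketch below assumes the master is 192.168.0.10 and uses 192.168.0.5 as a made-up address for the slave, so adjust the addresses to your own setup:

dig @192.168.0.10 firewall.cx SOA +short
dig @192.168.0.5 firewall.cx SOA +short

You can also simply look for the bak.firewall.cx and bak.192.168.0 files appearing in /var/named on the slave, and check the slave’s logs (normally syslog or /var/log/messages) for a zone transfer message.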

Our final article covers the setup of Linux BIND DNS caching.


LINUX BIND – DNS CACHING


In the previous articles, we spoke about the Internet domain hierarchy and explained how the ROOT servers are the DNS servers that hold the information about the authoritative DNS servers for the domains immediately below them, e.g. firewall.cx, microsoft.com. In fact, when a request reaches any of the ROOT DNS servers, they will refer the client to the appropriate authoritative DNS server, that is, the DNS server in charge of the domain.

For example, if you’re trying to resolve firewall.cx and your machine contacts a ROOT DNS server, the server will point your computer to the DNS server in charge of the .CX domain, which in turn will point your computer to the DNS server in charge of firewall.cx, currently the server with IP 74.200.90.5.

THE BIG PICTURE

As you can see, a simple DNS request can become quite a task before the domain is successfully resolved. This also means that a fair bit of traffic is generated to complete the procedure. Whether you’re paying a flat rate to your ISP or your company has a permanent connection to the Internet, the truth is that someone ends up paying for all these DNS requests! The above example was only for one computer trying to resolve one domain. Try to imagine a company that has 500 computers connected to the Internet or an ISP with 150,000 subscribers, and now you’re starting to get the big picture!

All that traffic is going to end up on the Internet if something isn’t done about it, not to mention who will be paying for it!

This is where DNS caching comes in. If we’re able to cache all these requests, then we don’t need to ask the ROOT DNS servers or any other external DNS server as long as we are trying to resolve previously visited sites or domains, because our caching system would “remember” all the domains we previously visited (and therefore resolved) and would be able to give us the IP address we’re looking for!

Note: Keep in mind that when you install BIND, it is set up as a caching DNS server by default, so all you need to do is start up the service, which is called ‘named’.
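A minimal sketch of getting the caching server going on a SysV-style system would be the two steps below. The exact service command varies by distribution (it may be service named start or, on newer systems, systemctl start named), and the resolv.conf edit simply tells the local machine to send its own lookups to the new caching server:

/etc/init.d/named start
echo "nameserver 127.0.0.1" >> /etc/resolv.conf

Using >> appends the entry rather than overwriting the file; on a production box you would normally edit /etc/resolv.conf by hand and put the 127.0.0.1 entry before any existing nameserver lines.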

Almost all Internet name servers use name caching to optimise search costs. Each of these servers maintains a cache containing all recently used names, as well as a record of where the mapping information for each name was obtained. When a client (e.g. your computer) asks the server to resolve a domain, the server first checks whether it has authority for that domain (meaning whether it is in charge of it). If not, the server checks its cache to see if the domain is in there; it will find it if it has been recently resolved.

Assuming the server does find it in the cache, it will take the information and pass it on to the client, but it will also mark the information as a non-authoritative binding, which means the server tells the client: “Here is the information you required, but keep in mind, I am not in charge of this domain”.

The cached information can be out of date and, if it is critical for the client not to receive stale data, it will then contact the authoritative DNS server for the domain and obtain the up-to-date information it requires.
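You can see this caching behaviour for yourself with dig. Ask your caching server for the same external name twice: the first answer is fetched from the authoritative servers, while the second comes straight from the cache, with the TTL already counting down and the query time close to zero. The domain below is only an example; any external name will do:

dig @localhost example.com A
dig @localhost example.com A

Compare the TTL column and the ‘;; Query time:’ line in the two answers: the second query will show a smaller TTL and a much faster response, and the answer is the non-authoritative, cached copy.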

DNS CACHING DOES COME WITH ITS PROBLEMS!

As you can clearly see, DNS caching can save you a lot of money, but it comes with its problems!

Caching works well in the Domain Name System because name-to-address bindings change infrequently. However, they do change. If the servers cached the information the first time it was requested and never updated it, the entries in the cache would eventually become incorrect.

THE SOLUTION

Fortunately, there is a solution that prevents DNS servers from giving out incorrect information. To ensure that the information in the cache is correct, every DNS server times each entry and disposes of the ones that have exceeded a reasonable lifetime. When a DNS server is asked for the information after it has removed the entry from its cache, it must go back to the authoritative source and obtain it again.

Whenever an authoritative DNS server responds to a request, it includes a Time To Live (TTL) value in the response. This TTL value is set in the zone files as you’ve probably already seen in the previous pages.

If you manage a DNS server and are planning changes in the next couple of weeks, such as redelegating (moving) your domain to another hosting company, changing the IP address of your website, or changing mail servers, then it's a good idea to set your TTL to a very small value well before the scheduled change. The reason is that any DNS server that queries your domain, website, or any resource record belonging to your domain will cache that data for as long as the TTL allows.

Decreasing the $TTL value to, say, 1 hour ensures that all DNS data from your domain will expire in the requester's cache 1 hour after it was received. If you didn't do this, the servers and clients (simple home users) who access your site or domain would cache the DNS data for the currently set time, which is normally around 3 days. Not a good thing when you make a big change :)

So keep all of the above in mind when you're about to make a change in the DNS server zone files. A couple of days before making the change, decrease the $TTL to a reasonable value, no more than a few hours, and once you complete the change, be sure to set it back to what it was.
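As a small sketch of what that change looks like in practice, the $TTL directive sits at the top of the zone file. Lowering it from, say, 3 days to 1 hour (and remembering to bump the zone's serial number so the slaves pick up the change) would look something like this; the values are just illustrative:

; db.firewall.cx - before the change
$TTL 259200   ; 3 days

; db.firewall.cx - a couple of days before the planned change
$TTL 3600     ; 1 hour

After editing the file, reload or restart named so the new TTL takes effect, and set $TTL back to its original value once the change is complete and has propagated.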

We hope this has given you an insight into how you can save yourself or your company money, and avoid the problems that occur when changing fields and values in the DNS zone files!

Hacking with Nikto – A tutorial for beginners


Nikto

Nikto is a vulnerability scanner that scans web servers for thousands of vulnerabilities and other known issues. It is very easy to use and does everything itself, without much instruction. It is included by default in pen-testing distros like Kali Linux. On other OSes/platforms you need to install it manually; it can be downloaded from http://cirt.net/Nikto2.

The website describes Nikto as follows:

Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6500 potentially dangerous files/CGIs, checks for outdated versions of over 1250 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.

Install Nikto on Ubuntu

On Ubuntu, Nikto can be installed directly from the Synaptic package manager or with apt-get:

$ sudo apt-get install nikto

Nikto is written in Perl, so you need to have Perl installed to be able to run it.

Install Nikto on Windows

On Windows, first install the Perl interpreter. It can be downloaded from http://www.activestate.com/activeperl. It's free. Download the installer and install Perl.

Next, download Nikto and extract the contents of the archive into a directory. Now run it from the command prompt like this:

C:\pentest\nikto-2.1.5>perl nikto.pl -h example.com

The above command runs the Perl interpreter, which loads the nikto.pl source file and runs it with whatever options are provided after it.

Using Nikto

Let's now use Nikto on a web server and see what kind of things it can do. Let's try a test against a PHP+MySQL website hosted on Apache. The actual URLs are not shown in the output.

$ nikto -h somesite.org
- Nikto v2.1.4
---------------------------------------------------------------------------
+ Target IP:          208.90.215.95
+ Target Hostname:    somesite.org
+ Target Port:        80
+ Start Time:         2012-08-11 14:27:31
---------------------------------------------------------------------------
+ Server: Apache/2.2.22 (FreeBSD) mod_ssl/2.2.22 OpenSSL/1.0.1c DAV/2
+ robots.txt contains 4 entries which should be manually viewed.
+ mod_ssl/2.2.22 appears to be outdated (current is at least 2.8.31) (may depend on server version)
+ ETag header found on server, inode: 5918348, size: 121, mtime: 0x48fc943691040
+ mod_ssl/2.2.22 OpenSSL/1.0.1c DAV/2 - mod_ssl 2.8.7 and lower are vulnerable to a remote buffer overflow which may allow a remote shell (difficult to exploit). CVE-2002-0082, OSVDB-756.
+ Allowed HTTP Methods: GET, HEAD, POST, OPTIONS, TRACE 
+ OSVDB-877: HTTP TRACE method is active, suggesting the host is vulnerable to XST
+ /lists/admin/: PHPList pre 2.6.4 contains a number of vulnerabilities including remote administrative access, harvesting user info and more. Default login to admin interface is admin/phplist
+ OSVDB-2322: /gallery/search.php?searchstring=<script>alert(document.cookie)</script>: Gallery 1.3.4 and below is vulnerable to Cross Site Scripting (XSS). Upgrade to the latest version. http://www.securityfocus.com/bid/8288.
+ OSVDB-7022: /calendar.php?year=<script>alert(document.cookie);</script>&month=03&day=05: DCP-Portal v5.3.1 is vulnerable to  Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
+ OSVDB-3233: /phpinfo.php: Contains PHP configuration information
+ OSVDB-3092: /system/: This might be interesting...
+ OSVDB-3092: /template/: This may be interesting as the directory may hold sensitive files or reveal system information.
+ OSVDB-3092: /updates/: This might be interesting...
+ OSVDB-3092: /README: README file found.
+ 6448 items checked: 1 error(s) and 14 item(s) reported on remote host
+ End Time:           2012-08-11 15:52:57 (5126 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested
$

The output has lots of useful information. Nikto has detected the following:

1. The web server and its version.
2. XSS vulnerabilities.
3. Vulnerable web applications like PHPList and Gallery.
4. Information-leaking pages.

Nikto also provides the OSVDB numbers of the issues for further analysis. Overall, Nikto is a very informative tool. The next task for a hacker would be to work out how to exploit one of the many vulnerabilities found.

Most of the tests done by Nikto are based on set rules or a dictionary. For example, Nikto has a list of default directories and files to look for, so much of the scanning process simply enumerates the presence of predefined URLs on the HTTP server. Apart from this, Nikto also inspects the HTTP headers for additional information and tests GET parameters for XSS vulnerabilities.
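To get a feel for what this enumeration looks like at the HTTP level, you can reproduce a single check by hand. The request below is only an illustration of the idea, not Nikto's exact behaviour: it asks for one of the well-known paths from Nikto's dictionary and prints just the status code, where a 200 suggests the file exists and a 404 means it does not. The host example.com is a placeholder:

$ curl -s -o /dev/null -w "%{http_code}\n" http://example.com/phpinfo.php

Nikto simply repeats this kind of request for thousands of known paths and flags the ones that respond.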

Check the additional options supported by Nikto using the help switch as follows:

root@kali:~# nikto -Help
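Beyond -h, a few options come up all the time. The line below is a sketch of a typical invocation against the hypothetical host example.com: -p selects the port, -ssl forces HTTPS, and -o writes the findings to a file. Check nikto -Help on your own build, since options have shifted slightly between releases:

root@kali:~# nikto -h example.com -p 443 -ssl -o nikto-report.txt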

Analysing nikto

To understand how Nikto works and discovers vulnerabilities, we can analyse it further. Nikto has an option to use an HTTP proxy, so by using a tool that can intercept the HTTP requests and show them in a readable format, we can analyse the queries made by Nikto. One such tool is Burp Suite. It has an integrated HTTP proxy, and a free edition that we are going to use.

Download the free edition of Burp Suite from http://www.portswigger.net/burp/

Burp Suite is written in Java, so the JRE is needed to run it. On Ubuntu it can be installed from the Synaptic package manager. Start Burp Suite and go to the Proxy tab, which has three sub-tabs: Intercept, Options and History. In the Intercept tab, turn interception off; otherwise Burp Suite will ask for confirmation before allowing each request. Then go to the History tab, which will show all the requests that Nikto makes.

Next, we need to tell Nikto to use the proxy server. The command to use the proxy would be:

$ nikto -host www.binarytides.com -useproxy http://localhost:8080/

Here is a screenshot of how the burp suite would show the requests.

Burp Suite provides a lot of information for each request: the request itself, the response, the headers, and so on.

Scan website for vulnerabilities with Uniscan


Uniscan is a vulnerability scanner that can scan websites and web applications for various security issues like LFI, RFI, SQL injection, XSS, etc. It's written in Perl, it's open source, and it can be downloaded from its SourceForge project page at http://sourceforge.net/projects/uniscan/.

It is included in BackTrack and can be found in the following directory:

/pentest/web/uniscan

In the BackTrack menu it's located at Vulnerability Assessment > Web Application Assessment > Web Vulnerability Scanner > uniscan.

On Kali Linux, run it directly from the terminal by issuing the command ‘uniscan’.

In this post we shall learn how to use this tool to scan websites. Usage is quite simple: run the script uniscan.pl to see the options and examples.

Basic scanning

root@kali:~# uniscan
####################################
# Uniscan project                  #
# http://uniscan.sourceforge.net/  #
####################################
V. 6.2


OPTIONS:
	-h 	help
	-u 	<url> example: https://www.example.com/
	-f 	<file> list of url's
	-b 	Uniscan go to background
	-q 	Enable Directory checks
	-w 	Enable File checks
	-e 	Enable robots.txt and sitemap.xml check
	-d 	Enable Dynamic checks
	-s 	Enable Static checks
	-r 	Enable Stress checks
	-i 	<dork> Bing search
	-o 	<dork> Google search
	-g 	Web fingerprint
	-j 	Server fingerprint

usage: 
[1] perl ./uniscan.pl -u http://www.example.com/ -qweds
[2] perl ./uniscan.pl -f sites.txt -bqweds
[3] perl ./uniscan.pl -i uniscan
[4] perl ./uniscan.pl -i "ip:xxx.xxx.xxx.xxx"
[5] perl ./uniscan.pl -o "inurl:test"
[6] perl ./uniscan.pl -u https://www.example.com/ -r

The usage section shows examples of how to use it. To scan a website, use the first example from the usage section:

root@kali:~# uniscan -u http://www.example.com/ -qweds

The above example scans a single URL. With the -f option, multiple sites can be scanned in one go; the list has to be provided as a text file.
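As a small sketch of what that looks like, the list is just one URL per line. The file name and domains below are made up for illustration:

root@kali:~# cat sites.txt
http://www.example.com/
http://www.example.org/
root@kali:~# uniscan -f sites.txt -bqweds

The -b flag, as shown in the usage examples above, sends Uniscan to the background while it works through the list.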

Fingerprinting

With the '-j' option, Uniscan fingerprints the server behind the URL. Server fingerprinting simply runs commands like ping, traceroute, nslookup and nmap against the server's IP address and packs the results together.

root@kali:~# uniscan -u http://www.example.com -j
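If you want to see roughly what Uniscan is doing behind the scenes with -j, you can run the same kind of commands yourself. This is only an approximation; the exact flags Uniscan passes may differ, and xxx.xxx.xxx.xxx is a placeholder for the target's IP address:

root@kali:~# ping -c 4 xxx.xxx.xxx.xxx
root@kali:~# traceroute xxx.xxx.xxx.xxx
root@kali:~# nslookup xxx.xxx.xxx.xxx
root@kali:~# nmap xxx.xxx.xxx.xxx

Uniscan simply gathers this kind of output into its report for the target.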

Another option is '-g', which does web-based fingerprinting by looking up specific URLs:

root@kali:~# uniscan -u http://www.example.com -g

Searching google and bing

Apart from scanning websites, Uniscan has another cool feature: performing Google and Bing searches and collecting the results in a simple text file. The -i option is used for searching Bing and the -o option for Google. To search Bing for all domains hosted on a given IP address, issue the following command:

root@kali:~# uniscan -i "ip:xxx.xxx.xxx.xxx"

Replace the x's with the IP address. The results are saved in a file called sites.txt, which can be found in ‘/usr/share/uniscan’. Ideally they would be saved in the user's home directory or the working directory instead.

To search Google using a term:

root@kali:~# uniscan -o 'inurl:"section.php?id="'

However, Google will block too many automated search queries, so use the tool carefully.