Tuesday, 7 April 2015

History of the Internet:
             
                   
[Image: Leonard Kleinrock and the first Interface Message Processor]

The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of packet networking originated in several computer science laboratories in the United States, Great Britain, and France. The US Department of Defense awarded contracts as early as the 1960s for packet network systems, including the development of the ARPANET (which would become the first network to use the Internet Protocol). The first message was sent over the ARPANET from computer science Professor Leonard Kleinrock's laboratory at the University of California, Los Angeles (UCLA) to the second network node at the Stanford Research Institute (SRI).

Packet switching networks such as the ARPANET, the Mark I at NPL in the UK, CYCLADES, the Merit Network, Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of communications protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.
Access to the ARPANET was expanded in 1981
when the National Science Foundation (NSF)
funded the Computer Science Network (CSNET).
In 1982, the Internet protocol suite (TCP/IP)
was introduced as the standard networking
protocol on the ARPANET. In the early 1980s the NSF funded the establishment of national supercomputing centers at several universities,
and provided interconnectivity in 1986 with the
NSFNET project, which also created network
access to the supercomputer sites in the United
States from research and education
organizations. Commercial Internet service
providers (ISPs) began to emerge in the late
1980s. The ARPANET was decommissioned in
1990. Private connections to the Internet by
commercial entities became widespread quickly,
and the NSFNET was decommissioned in 1995,
removing the last restrictions on the use of the
Internet to carry commercial traffic.
In the 1980s, the work of Tim Berners-Lee in the United Kingdom on the World Wide Web theorised that protocols could link hypertext documents into a working system, [1] marking the beginning of the modern Internet. Since the mid-1990s, the Internet has had a revolutionary impact on culture and commerce, including the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as NSF's very high speed Backbone Network Service (vBNS), Internet2, and National LambdaRail. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet's takeover of the global communication landscape was almost instant in historical terms: it carried only 1% of the information flowing through two-way telecommunications networks in 1993, already 51% by 2000, and more than 97% of the telecommunicated information by 2007. [2] Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking.

Internet history timeline from the blog Netizen Kondaba

Early research and development:
1961 – First packet-switching papers
1966 – Merit Network founded
1966 – ARPANET planning starts
1969 – ARPANET carries its first packets
1970 – Mark I network at NPL (UK)
1970 – Network Information Center (NIC)
1971 – Merit Network's packet-switched network operational
1971 – Tymnet packet-switched network
1972 – Internet Assigned Numbers Authority (IANA) established
1973 – CYCLADES network demonstrated
1974 – Telenet packet-switched network
1976 – X.25 protocol approved
1978 – Minitel introduced
1979 – Internet Activities Board (IAB)
1980 – USENET news using UUCP
1980 – Ethernet standard introduced
1981 – BITNET established

Merging the networks and creating the Internet:
1981 – Computer Science Network (CSNET)
1982 – TCP/IP protocol suite formalized
1982 – Simple Mail Transfer Protocol (SMTP)
1983 – Domain Name System (DNS)
1983 – MILNET split off from ARPANET
1985 – First .COM domain name registered
1986 – NSFNET with 56 kbit/s links
1986 – Internet Engineering Task Force (IETF)
1987 – UUNET founded
1988 – NSFNET upgraded to 1.5 Mbit/s (T1)
1988 – OSI Reference Model released
1988 – Morris worm
1989 – Border Gateway Protocol (BGP)
1989 – PSINet founded, allows commercial traffic
1989 – Federal Internet Exchanges (FIXes)
1990 – GOSIP (without TCP/IP)
1990 – ARPANET decommissioned
1990 – Advanced Network and Services (ANS)
1990 – UUNET/Alternet allows commercial traffic
1990 – Archie search engine
1991 – Wide area information server (WAIS)
1991 – Gopher
1991 – Commercial Internet eXchange (CIX)
1991 – ANS CO+RE allows commercial traffic
1991 – World Wide Web (WWW)
1992 – NSFNET upgraded to 45 Mbit/s (T3)
1992 – Internet Society (ISOC) established
1993 – Classless Inter-Domain Routing (CIDR)
1993 – InterNIC established
1993 – Mosaic web browser released
1994 – Full text web search engines
1994 – North American Network Operators' Group (NANOG) established

Commercialization, privatization, and broader access lead to the modern Internet:
1995 – New Internet architecture with commercial ISPs connected at NAPs
1995 – NSFNET decommissioned
1995 – GOSIP updated to allow TCP/IP
1995 – very high-speed Backbone Network Service (vBNS)
1995 – IPv6 proposed
1998 – Internet Corporation for Assigned Names and Numbers (ICANN)
1999 – IEEE 802.11b wireless networking
1999 – Internet2/Abilene Network
1999 – vBNS+ allows broader access
2000 – Dot-com bubble bursts
2001 – New top-level domain names activated
2001 – Code Red I, Code Red II, and Nimda worms
2003 – UN World Summit on the Information Society (WSIS) phase I
2003 – National LambdaRail founded
2004 – UN Working Group on Internet Governance (WGIG)
2005 – UN WSIS phase II
2006 – First meeting of the Internet Governance Forum
2010 – First internationalized country code top-level domains registered
2012 – ICANN begins accepting applications for new generic top-level domain names

Examples of popular Internet services:
1990 – IMDb Internet movie database
1995 – Amazon.com online retailer
1995 – eBay online auction and shopping
1995 – Craigslist classified advertisements
1996 – Hotmail free web-based e-mail
1997 – Babel Fish automatic translation
1998 – Google Search
1998 – Yahoo! Clubs (now Yahoo! Groups)
1998 – PayPal Internet payment system
1999 – Napster peer-to-peer file sharing
2001 – BitTorrent peer-to-peer file sharing
2001 – Wikipedia, the free encyclopedia
2003 – LinkedIn business networking
2003 – Myspace social networking site
2003 – Skype Internet voice calls
2003 – iTunes Store
2003 – 4chan, anonymous image-based bulletin board
2003 – The Pirate Bay, torrent file host
2004 – Facebook social networking site
2004 – Podcast media file series
2004 – Flickr image hosting
2005 – YouTube video sharing
2005 – Reddit link voting
2005 – Google Earth virtual globe
2006 – Twitter microblogging
2007 – WikiLeaks anonymous news and information leaks
2007 – Google Street View
2007 – Kindle, e-reader and virtual bookshop
2008 – Amazon Elastic Compute Cloud (EC2)
2008 – Dropbox cloud-based file hosting
2008 – Encyclopedia of Life, a collaborative encyclopedia intended to document all living species
2008 – Spotify, a DRM-based music streaming service
2009 – Bing search engine
2009 – Google Docs, web-based word processor, spreadsheet, presentation, form, and data storage service
2009 – Kickstarter, a threshold pledge system
2009 – Bitcoin, a digital currency
2010 – Instagram, photo sharing and social networking
2011 – Google+, social networking
2011 – Snapchat, photo sharing

Further information: Timeline of popular Internet services

Precursors
See also: Victorian Internet
The telegraph system was the first fully digital communication system; thus the Internet has precursors that date back to the 19th century, more than a century before the digital Internet became widely used in the second half of the 1990s. The concept of data communication – transmitting data between two different places connected via some kind of electromagnetic medium, such as radio or an electrical wire – predates the introduction of the first computers. Such communication systems were typically limited to point-to-point communication between two end devices. Telegraph systems and telex machines can be considered early precursors of this kind of communication.

Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley during the early 20th century.
Early computers used the technology available
at the time to allow communication between the
central processing unit and remote terminals. As
the technology evolved, new systems were
devised to allow communication over longer
distances (for terminals) or with higher speed
(for interconnection of local devices) that were
necessary for the mainframe computer model.
Using these technologies made it possible to
exchange data (such as files) between remote
computers. However, the point-to-point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also deemed inherently unsafe for strategic and military use, because there were no alternative paths for the communication in case of an enemy attack.
Three terminals and an ARPA
Main articles: RAND Corporation and ARPANET
A pioneer in the call for a global network, J. C.
R. Licklider, proposed in his January 1960 paper,
"Man-Computer Symbiosis": "A network of such
[computers], connected to one another by wide-
band communication lines [which provided] the
functions of present-day libraries together with
anticipated advances in information storage and
retrieval and [other] symbiotic functions." [3]
In August 1962, Licklider and Welden Clark
published the paper "On-Line Man-Computer
Communication", [4] which was one of the first
descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". [5] As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). Because each terminal could talk only to its own remote system, the apparent waste of resources this caused made Licklider's identified need for inter-networking obvious.





[Image: This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.]

This post is by Internet_Lover Kondaba Deshmukh on the blog Netizen Kondaba.
Thanks.

For more information about the history of the Internet, click here.

The evolution of mobile phones:

A mobile phone (also known as a cellular phone, cell phone, hand phone, or simply a phone) is a phone that can make and receive telephone calls over a radio link while moving around a wide geographic area. It does so by connecting to a cellular network provided by a mobile phone operator, allowing access to the public telephone network. By contrast, a cordless telephone is used only within the short range of a single, private base station.

In addition to telephony, modern mobile phones also support a wide variety of other services such as text messaging, MMS, email, Internet access, short-range wireless communications (infrared, Bluetooth), business applications, gaming, and photography. Mobile phones that offer these and more general computing capabilities are referred to as smartphones.
The first hand-held cell phone was demonstrated by John F. Mitchell [1][2] and Dr. Martin Cooper of Motorola in 1973, using a handset weighing around 4.4 pounds (2 kg). [3] In 1983, the DynaTAC 8000x was the first to be commercially available. From 1983 to 2014, worldwide mobile phone subscriptions grew from zero to over 7 billion, penetrating 100% of the global population and reaching the bottom of the economic pyramid. [4] In 2014, the top cell phone manufacturers were Samsung, Nokia, Apple, and LG.


History of mobile phones
[Image: Martin Cooper of Motorola made the first publicized handheld mobile phone call on a prototype DynaTAC model on April 4, 1973; this is a 2007 reenactment.]
                           
[Image: The Motorola DynaTAC 8000X from 1984, the first commercially available hand-held cellular mobile phone.]
A hand-held mobile radiotelephone is an old
dream of radio engineering. One of the earliest
descriptions can be found in the 1948 science
fiction novel Space Cadet by Robert Heinlein .
The protagonist, who has just traveled to
Colorado from his home in Iowa, receives a call
from his father on a telephone in his pocket.
Before leaving for earth orbit, he decides to
ship the telephone home "since it was limited by
its short range to the neighborhood of an
earth-side [i.e. terrestrial] relay office." Ten
years later, an essay by Arthur C. Clarke
envisioned a "personal transceiver, so small and
compact that every man carries one." Clarke
wrote: "the time will come when we will be able
to call a person anywhere on Earth merely by
dialing a number." Such a device would also, in
Clarke's vision, include means for global
positioning so that "no one need ever again be
lost." In his 1962 Profiles of the Future , he
predicted the advent of such a device taking
place in the mid-1980s.
Early predecessors of cellular phones included
analog radio communications from ships and
trains. The race to create truly portable
telephone devices began after World War II, with
developments taking place in many countries.
The advances in mobile telephony have been traced in successive generations, from the early "0G" (zeroth generation) services like the Bell System's Mobile Telephone Service and its successor, the Improved Mobile Telephone Service. These "0G" systems were not cellular, supported few simultaneous calls, and were very expensive.
The first handheld mobile cell phone was
demonstrated by Motorola in 1973. The first
commercial automated cellular network was
launched in Japan by NTT in 1979. In 1981, this
was followed by the simultaneous launch of the
Nordic Mobile Telephone (NMT) system in
Denmark, Finland, Norway and Sweden. [7]
Several other countries then followed in the early to mid-1980s. These first-generation ("1G") systems could support far more simultaneous calls, but still used analog technology.
In 1991, the second-generation (2G) digital cellular technology was launched in Finland by Radiolinja on the GSM standard, which sparked competition in the sector as the new operators challenged the incumbent 1G network operators. Ten years later, in 2001, the third generation (3G) was launched in Japan by NTT DoCoMo on the WCDMA standard. [8] This was followed by 3.5G, 3G+ or turbo-3G enhancements based on the high-speed packet access (HSPA) family, allowing UMTS networks to have higher data-transfer speeds and capacity.
By 2009, it had become clear that, at some
point, 3G networks would be overwhelmed by the
growth of bandwidth-intensive applications like
streaming media. [9] Consequently, the industry
began looking to data-optimized 4th-generation
technologies, with the promise of speed
improvements up to 10-fold over existing 3G
technologies. The first two commercially
available technologies billed as 4G were the
WiMAX standard (offered in the U.S. by Sprint)
and the LTE standard, first offered in
Scandinavia by TeliaSonera.
Features:
All mobile phones have a number of features in
common, but manufacturers also try to
differentiate their own products by
implementing additional functions to make them
more attractive to consumers. This has led to
great innovation in mobile phone development
over the past 20 years.
The common components found on all phones are:
- A battery, providing the power source for the phone functions.
- An input mechanism to allow the user to interact with the phone. The most common input mechanism is a keypad, but touch screens are also found in most smartphones.
- A screen which echoes the user's typing, and displays text messages, contacts and more.
- Basic mobile phone services that allow users to make calls and send text messages.
- A SIM card (on all GSM phones), which allows an account to be swapped among devices; some CDMA devices have a similar card called a R-UIM.
- An International Mobile Equipment Identity (IMEI) number, which uniquely identifies individual GSM, WCDMA, iDEN and some satellite phone devices.
Low-end mobile phones are often referred to as feature phones, and offer basic telephony. Handsets with more advanced computing ability through the use of native software applications became known as smartphones.
In sound quality, however, smartphones and feature phones vary little. Some audio-quality enhancing features, like Voice over LTE and HD Voice, have appeared and are often available on newer smartphones. Sound quality can remain a problem with both, as it depends not so much on the phone itself as on the quality of the network and, in the case of long-distance calls, the bottlenecks/choke points met along the way. [10] As such, even features such as Voice over LTE and HD Voice may not improve things on long-distance calls. In some cases smartphones can improve audio quality even on long-distance calls by using a VoIP phone service over someone else's WiFi/Internet connection. [11]
Several phone series have been introduced to address specific market segments, such as the RIM BlackBerry, focusing on enterprise/corporate customers' email needs; the Sony Ericsson 'Walkman' series of music phones and 'Cyber-shot' series of camera phones; the Nokia Nseries of multimedia phones; the Palm Pre; the HTC Dream; and the Apple iPhone.
Text Messaging:
The most commonly used data application on
mobile phones is SMS text messaging. The first
SMS text message was sent from a computer to
a mobile phone in 1992 in the UK, while the first
person-to-person SMS from phone to phone was
sent in Finland in 1993.
The first mobile news service, delivered via SMS,
was launched in Finland in 2000, and
subsequently many organizations provided "on-
demand" and "instant" news services by SMS.

SIM Card:
GSM feature phones require a small microchip called a Subscriber Identity Module, or SIM card, in order to function. The SIM card is approximately the size of a small postage stamp and is usually placed underneath the battery in the rear of the unit. The SIM securely stores the international mobile subscriber identity (IMSI) and the subscriber authentication key (Ki) used to identify and authenticate the user of the mobile phone. The SIM card allows users to change phones by simply removing the SIM card from one mobile phone and inserting it into another mobile phone or broadband telephony device, provided that this is not prevented by a SIM lock.
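
To see how the IMSI and Ki work together, here is a minimal Python sketch of a GSM-style challenge-response authentication. The subscriber record is made up, and since the real A3 algorithm is operator-specific, an HMAC is used purely as a stand-in. The point is that the Ki itself never travels over the air: only the random challenge and the signed response do.

```python
import hmac
import hashlib
import os

# Hypothetical subscriber record; real Ki values are burned into the SIM
# and stored in the operator's authentication centre.
SUBSCRIBERS = {"208011234567890": b"sixteen-byte-key"}  # IMSI -> Ki

def a3_stand_in(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for the operator-specific A3 algorithm (NOT the real one)."""
    return hmac.new(ki, rand, hashlib.sha1).digest()[:4]  # SRES is 32 bits

def network_challenge() -> bytes:
    """The network sends a random 128-bit challenge (RAND) to the phone."""
    return os.urandom(16)

def sim_response(ki: bytes, rand: bytes) -> bytes:
    """The SIM computes the signed response (SRES) from Ki and RAND."""
    return a3_stand_in(ki, rand)

# Authentication flow: the network compares the SIM's SRES with its own.
imsi = "208011234567890"
rand = network_challenge()
sres_from_sim = sim_response(SUBSCRIBERS[imsi], rand)
sres_expected = a3_stand_in(SUBSCRIBERS[imsi], rand)
print("authenticated:", hmac.compare_digest(sres_from_sim, sres_expected))
```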
The first SIM card was made in 1991 by Munich smart card maker Giesecke & Devrient for the Finnish wireless network operator Radiolinja.

List of mobile phone makers:
Quantity market shares by Gartner (new sales):

Brand             2012    2013
Samsung           22.0%   24.6%
Nokia             19.1%   13.9%
Apple              7.5%    8.3%
LG Electronics     3.3%    3.8%
ZTE                3.9%    3.3%
Others            34.9%   34.0%
Click here to read more such articles from Netizen Kondaba.
Thanks.

10 Tech Acronyms You Must Know:

Takeaway: In the tech field, there's a lot of jargon
that's totally unfamiliar to those who don't call
themselves geeks.
The technology industry loves its acronyms. Terms like HTML, GUI, SSL, HTTP, Wi-Fi, RAM, and LAN have been so common for so long that even the average user understands many of them right away. But with hundreds - possibly even thousands - of IT acronyms being thrown around (not to mention more being added all the time), it can be hard to keep track of them all. Here are the top 10 tech acronyms you should know now.
RFID - Radio Frequency Identification

Call it an "intelligent label," or even a "super bar code." RFID tags are readable codes that can contain more information than Universal Product Code (UPC) labels, or even QR codes. You may have seen these small, typically square tags already. They're clear plastic with what looks like circuit boards etched onto them, and can be found inside DVD packaging and other products.
RFID tags have the ability to "talk" to a networked system and convey data. They are primarily used to track things - retail merchandise, vehicles, pets, airline passengers and even Alzheimer's patients. There are passive, semi-passive and active RFID tags. In the not-too-distant future, we may even see talking tags. Even the U.S. government uses RFID tags. In fact, they're embedded in each and every U.S. passport. Awareness of RFID technology is essential for anyone working in technology. It's also related to our next acronym ...
 NFC - Near Field Communication

  If you’ve ever tapped a credit card against a
terminal to make a payment, or tapped your
smartphone on a shelf label to get product
information, you’ve used near field communication
(NFC) technology. This contactless form of
transferring data uses RFID standards, making the
two terms closely related.
NFC-enabled devices can read passive information
stored in RFID tags. However, this technology is
actually a step ahead. Whereas RFID can only store
information, NFC can both send and receive it. So,
two smartphones equipped with NFC technology can
"talk" to each other, with both devices
participating in the "conversation."
The primary use for NFC right now is contactless
or mobile payments. In the future, this technology
may be used for enterprise access or verification,
public services and transit systems, device-to-
device collaboration for business and gaming, and
more.
  SMO - Social Media Optimization

  Search engine optimization (SEO) is an established
strategy for Internet marketers that aims to
increase websites’ rankings on search engines. That
acronym's old news though. Now, with social
networks invading every aspect of our lives, their
influence is heavily impacting search engine results,
giving rise to a newer term: social media
optimization (SMO).
SMO is not synonymous with SEO, although it’s
often considered one aspect of an overall sound
SEO strategy. Businesses using SMO are looking to
optimize their websites and syndicated content for
fast, hopefully viral distribution through social
sharing. This increases their perceived authority,
which in turn gives them more weight in search
engine rankings.
  ESN - Enterprise Social Networking

Another term arising from the popularity of social media, enterprise social networking (ESN) is actually separate from "regular" social media. This term refers to internalized social network activity on platforms like Yammer, Jive, or Convo, which is limited to communication between company staff, vendors, partners and customers.
  REEF - Retainable Evaluator Execution Framework

Big data is big news, and like everything that’s
important in tech, Microsoft has jumped on board.
The Retainable Evaluator Execution Framework
(REEF) is big data technology from Microsoft that
the company has open sourced for developers.
REEF runs on top of YARN (a "joke" acronym that stands for Yet Another Resource Negotiator), the next-generation resource manager from Hadoop.

  NoSQL - Not only Structured Query Language
A departure from traditional databases, NoSQL refers to a class of cloud-friendly, non-relational databases that offer high performance, availability and scalability. Designed to handle the messy and unpredictable data that has become normal in today's digital world, NoSQL isn't built on tables, and doesn't use traditional SQL. Instead, it supports BigTable-style stores, graph databases, and key-value and document stores. (Get the lowdown on NoSQL in NoSQL 101.)
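
As a toy illustration of the document-store idea (a sketch only, using Python's standard library rather than any particular NoSQL product), note how each record is a free-form document rather than a row in a fixed-schema table:

```python
import json

# A minimal in-memory "document store": a collection maps document IDs
# to schemaless JSON-like documents, so records need not share columns.
collection = {}

def put(doc_id, doc):
    collection[doc_id] = doc

def find(predicate):
    """Return all documents matching an arbitrary predicate."""
    return [d for d in collection.values() if predicate(d)]

# Unlike rows in a relational table, these two documents have
# different "shapes" but live happily in the same collection.
put("user:1", {"name": "Asha", "tags": ["admin"], "logins": 42})
put("user:2", {"name": "Ravi", "address": {"city": "Pune"}})

print(find(lambda d: "admin" in d.get("tags", [])))
print(json.dumps(collection["user:2"], indent=2))
```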
  SDE - Software-Defined Everything

Software-defined everything (SDE) is a catch-all term that refers to a broad group of tech functionalities that rely on software, rather than traditional hardware, to perform. Software-defined networking (SDN) was the first component to come into popular use: a technology that allows networks to be controlled from a centralized software dashboard rather than from physical hardware. It was followed by software-defined storage (SDS) and software-defined data centers (SDDC). SDE is the umbrella for this broader trend, which aims to make computing faster, more widely available and more affordable.
  AaaS - Analytics as a Service

  The -aaS family of acronyms refers to the on-
demand services that have replaced the more
traditional one-time, high-investment technologies
of the past. This group started with Software as a
Service (SaaS), which offers many types of
software from newly developed to enterprise-
grade staples as a monthly, cloud-hosted service
instead of an installation on physical machines.
Analytics as a Service (AaaS) joins SaaS, Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) to give businesses a more competitive chance at implementing data insights without having to invest in full-blown analytics platforms - or hire consultants.
  IoT - the Internet of Things

  Like something straight out of science fiction, the
Internet of Things (IoT) allows "things" (people,
animals, and objects) to transmit information
automatically over a network, without interacting
with a computer or another person. A few examples
of the IoT include tire pressure sensors in vehicles,
biochip transponders implanted in farm animals,
and heart monitor implants for humans. Basically,
the IoT promotes everyday connectivity between
everything.
This data is transmitted using unique IP address identifiers. With the enormous increase in address space brought by IPv6, there are more than enough identifiers for every conceivable device, with plenty left to spare.
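
As a rough sketch of the idea (the collector URL and payload fields below are invented for illustration), an IoT device typically just packages a reading as JSON and pushes it over the network, with no person or conventional computer in the loop:

```python
import json
import time
import urllib.request

# Hypothetical collector endpoint; a real deployment would use the
# vendor's ingestion API, often over MQTT or CoAP rather than HTTP.
ENDPOINT = "http://collector.example.com/readings"

def send_reading(sensor_id: str, pressure_kpa: float) -> None:
    """Package a tire-pressure reading as JSON and POST it."""
    payload = json.dumps({
        "sensor": sensor_id,
        "pressure_kpa": pressure_kpa,
        "timestamp": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:
        # Expected here, since the collector above does not exist.
        print("delivery failed:", exc)

send_reading("front-left", 220.5)
```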
  NBIC - Nanotechnology, Biotechnology, Information
Technology, Cognitive Science

This mouthful of a term, sometimes shortened to Nano-Bio-Info-Cogno (but mostly called NBIC), is the current overall term for the latest emerging and converging technologies. NBIC covers developments that affect biomedical informatics and improve human performance. This convergence has the potential to transform humanity through, for example, the use of 3-D printing to create working artificial limbs.
In the tech field, you not only need to understand,
well, technology, you also need to know the jargon
that's totally unfamiliar to those who don't call
themselves geeks. Of course, these acronyms may
be common language in no time. Many of them
already are. So, how many of them did you know?

Learn more and enjoy reading my blog.
Click here to read more from the blog Netizen Kondaba.
Thanks.

Monday, 6 April 2015

The 7 Basic Principles of IT Security


Takeaway: IT professionals use best practices to
keep corporate, government and other
organizations' systems safe.
Security is a constant worry when it comes to information technology. Data theft, hacking, malware and a host of other threats are enough to keep any IT professional up at night. In this article, we'll look at the basic principles and best practices that IT professionals use to keep their systems safe.
The Goal of Information Security
Information security follows three overarching
principles:
Confidentiality: This means that information is
only being seen or used by people who are
authorized to access it.
Integrity: This means that any changes to the information by an unauthorized user are impossible (or at least detected), and changes by authorized users are tracked (see the sketch after this list for one simple detection technique).
Availability: This means that the information is
accessible when authorized users need it.
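
As a minimal sketch of the integrity principle (illustrative only; real systems use message authentication codes or digital signatures so that an attacker cannot simply recompute the hash), a cryptographic digest makes unauthorized changes detectable:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of the protected information."""
    return hashlib.sha256(data).hexdigest()

record = b"pay grade: L5"
stored_digest = digest(record)            # kept somewhere tamper-evident

tampered = b"pay grade: L9"               # unauthorized modification
print(digest(record) == stored_digest)    # True  - integrity holds
print(digest(tampered) == stored_digest)  # False - change is detected
```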
So, armed with these higher-level principles, IT
security specialists have come up with best
practices to help organizations ensure that their
information stays safe.
IT Security Best Practices
There are many best practices in IT security that
are specific to certain industries or businesses, but
some apply broadly.
1. Balance Protection With Utility
Computers in an office could be completely
protected if all the modems were torn out and
everyone was kicked out of the room - but then
they wouldn’t be of use to anyone. This is why
one of the biggest challenges in IT security is
finding a balance between resource availability
and the confidentiality and integrity of the
resources.
Rather than trying to protect against all kinds
of threats, most IT departments focus on
insulating the most vital systems first and then
finding acceptable ways to protect the rest
without making them useless. Some of the
lower-priority systems may be candidates for
automated analysis, so that the most important
systems remain the focus.
2. Split up the Users and Resources
For an information security system to work, it
must know who is allowed to see and do
particular things. Someone in accounting, for
example, doesn’t need to see all the names in a
client database, but he might need to see the
figures coming out of sales. This means that a
system administrator needs to assign access by
a person’s job type, and may need to further
refine those limits according to organizational
separations. Ideally, this ensures that the chief financial officer is able to access more data and resources than a junior accountant.
That said, rank doesn’t mean full access. A
company's CEO may need to see more data than
other individuals, but he doesn’t automatically
need full access to the system. This brings us to
the next point.
3. Assign Minimum Privileges
An individual should be assigned the minimum
privileges needed to carry out his or her
responsibilities. If a person’s responsibilities
change, so will the privileges. Assigning minimum
privileges reduces the chances that Joe from
design will walk out the door with all the
marketing data.
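
A minimal sketch of how minimum privileges might be expressed in code (the role names and permissions are invented for illustration): each role is granted only the actions its job requires, and everything else is denied by default.

```python
# Hypothetical role-to-permission map: each role gets only what the
# job requires, nothing more (the principle of least privilege).
ROLE_PERMISSIONS = {
    "junior_accountant": {"read_sales_figures"},
    "cfo": {"read_sales_figures", "read_payroll", "approve_budget"},
    "designer": {"read_design_assets"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Joe from design cannot walk out the door with the marketing data:
print(authorize("designer", "read_sales_figures"))  # False
print(authorize("cfo", "read_payroll"))             # True
```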
4. Use Independent Defenses
This is a military principle as much as an IT
security one. Using one really good defense,
such as authentication protocols, is only good
until someone breaches it. When several
independent defenses are employed, an
attacker must use several different strategies
to get through them. Introducing this type of
complexity doesn’t provide 100 percent
protection against attacks, but it does reduce
the chances of a successful attack.
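
As a toy illustration of independent defenses (all credentials here are hard-coded for brevity), access requires passing two unrelated checks, so defeating one mechanism alone is not enough:

```python
import hmac
import hashlib

# Two independent mechanisms: a password check and a one-time code,
# so compromising one credential alone does not grant access.
PASSWORD_HASH = hashlib.sha256(b"correct horse").hexdigest()
VALID_OTP = "493021"  # would come from a separate token device

def check_password(pw: str) -> bool:
    return hmac.compare_digest(
        hashlib.sha256(pw.encode()).hexdigest(), PASSWORD_HASH)

def check_otp(code: str) -> bool:
    return hmac.compare_digest(code, VALID_OTP)

def grant_access(pw: str, code: str) -> bool:
    # Both independent defenses must pass, not just either one.
    return check_password(pw) and check_otp(code)

print(grant_access("correct horse", "493021"))  # True
print(grant_access("correct horse", "000000"))  # False: one layer held
```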
5. Plan for Failure
Planning for failure will help minimize its actual
consequences should it occur. Having backup
systems in place beforehand allows the IT
department to constantly monitor security
measures and react quickly to a breach. If the
breach is not serious, the business or
organization can keep operating on backup while
the problem is addressed. IT security is as much
about limiting the damage from breaches as it
is about preventing them.
6. Record, Record, Record
Ideally, a security system will never be
breached, but when a security breach does take
place, the event should be recorded. In fact, IT
staff often record as much as they can, even
when a breach isn't happening. Sometimes the
causes of breaches aren’t apparent after the
fact, so it's important to have data to track
backwards. Data from breaches will eventually
help to improve the system and prevent future
attacks - even if it doesn’t initially make sense.
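
A tiny sketch of the "record, record, record" habit using Python's standard logging module (the event fields are invented; a real deployment would ship these records to a central, append-only store):

```python
import logging

# Log to a file so there is a persistent record to examine after the fact.
logging.basicConfig(
    filename="security.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_login(user: str, success: bool) -> None:
    """Record every attempt, not just failures: the causes of breaches
    often only make sense in hindsight, so keep the full trail."""
    if success:
        logging.info("login ok user=%s", user)
    else:
        logging.warning("login failed user=%s", user)

record_login("jdoe", True)
record_login("jdoe", False)
```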
7. Run Frequent Tests
Hackers are constantly improving their craft,
which means information security must evolve to
keep up. IT professionals run tests, conduct risk
assessments, reread the disaster recovery plan,
check the business continuity plan in case of
attack, and then do it all over again.
The Takeaway
IT security is a challenging job that requires
attention to detail at the same time as it demands
a higher-level awareness. However, like many tasks
that seem complex at first glance, IT security can
be broken down into basic steps that can simplify
the process. That’s not to say it makes things
easy, but it does keep IT professionals on their
toes.

 Netizen Kondaba

Sunday, 5 April 2015


 ♡"Welcome Post"♡

            ♡"Welcome Post"♡
》Welcome Friends.......!!

》Welcome to my blog "Netizen Kondaba"

     I'm Kondaba Deshmukh from Maharashtra, India.
Today I have started blogging.
I want to share the information I have gathered with all of you, because someone said that "knowledge increases by giving it to others".
   
     Today's world is the world of technology, a world of digits, so we all educate ourselves through the Internet as well, because it is a fast and interesting medium for learning. Nowadays nearly all countries around the world are connected through the Internet. It is a powerful invention of this century. So thanks to ARPA (the Advanced Research Projects Agency) for developing this awesome network, i.e. the Internet.

     By the way, this is my first post on the blog, introducing myself and my blog (Netizen Kondaba).
     "The Internet is the first thing that humanity has built that humanity doesn't understand, the largest experiment in anarchy that we have ever had."

 ☆ Netizen means any user of the Internet, especially one who makes a habit of using it.

Thanks...!!
Kondaba Deshmukh
Blog : Netizen Kondaba
Blog Address: kdeshmukh416.blogspot.com