Which of the following store settings that make up baselines?

Some of the most common default configurations found while performing penetration tests against IIS Web servers include debugging utilities and methods, sample files, WebDAV and ISAPI extensions, and internal IP address disclosures. Although these methods, files, and extensions are implemented to provide functionality, security concerns have been identified with some of the components mentioned and they should be implemented only when needed. Disabling unnecessary components can help limit the attacks that can be performed against the IIS implementation.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495516000066

Introduction to IP Network Security

Eric Knipp, ... Edgar Danielyan, Technical Editor, in Managing Cisco Network Security (Second Edition), 2002

Process Application Layer Security

Any vendor's software is susceptible to harboring security vulnerabilities. Security can be seen as an arms race, with the bad guys exploiting vulnerabilities and the good guys patching them. Every day, Web sites that track security vulnerabilities, such as CERT, report new vulnerability discoveries in operating systems, application software, server software, and even in security software or devices. Last year, CERT reported an average of over six vulnerabilities a day. Figure 1.3 shows the increase in reported incidents over the years.

Figure 1.3. CERT Reporting Statistics

Patches are implemented for these known bugs, but new vulnerability discoveries continue. Sometimes patches fix one bug, only to introduce another. Even open source software that has been widely used for ten years is not immune to harboring serious vulnerabilities. In June 2000, CERT reported that MIT Kerberos had multiple buffer overflow vulnerabilities that could be used to gain root access, and in February 2002, widespread vulnerabilities were announced in the fundamental ASN.1 encoding schema common to all SNMP agents, allowing the compromise of nearly all infrastructure devices across the Internet.

Many sites do not keep up when it comes to applying patches and so leave their systems with known vulnerabilities. It is important to keep all of your software up to date. Many of the most damaging attacks have occurred in end-user software such as electronic mail clients. Attacks can be directed at any software and can seriously affect your network.

The default configuration of hosts makes them easy to get up and running, but many default services are unnecessary. These unnecessary services increase the vulnerabilities of the system. On each host, all unnecessary services should be shut down. Misconfigured hosts also increase the risk of an unauthorized access. All default passwords and community names must be changed.

Note

The SANS (System Administration, Networking, and Security) Institute in conjunction with the National Infrastructure Protection Center (NIPC) has created a list of the top 20 Internet security threats as determined by a group of security experts. The list is maintained at www.sans.org/top20.htm. This guide is an excellent list of the most urgent and critical vulnerabilities to repair on your systems. Two of the problems listed earlier—unnecessary default services and default passwords—are on this list.

This effort was started because experience has shown that a small number of vulnerabilities are used repeatedly to gain unauthorized access to many systems.

SANS has also published a list of the most common mistakes made by end users, executives, and information technology personnel. It is available at www.sans.org/mistakes.htm.

The increased complexity of systems, the shortage of well-trained administrators, and a lack of resources all contribute to reducing the security of hosts and applications. We cannot depend on hosts to protect themselves from all threats. A useful approach is to use automated scanning devices, such as Cisco Secure Scanner (formerly NetSonar) to help identify the vulnerabilities from a network perspective, and work with the information owner to apply the necessary remediation.

All is not lost, however. Application layer security can provide end-to-end security from an application running on one host through the network to the application on another host. It does not care about the underlying transport mechanism. Complete coverage of security requirements (integrity, confidentiality, and non-repudiation) can be provided at this layer. Applications have a fine granularity of control over the nature and content of the transactions. However, application layer security is not a general solution, because each application and client must be adapted to provide the security services. Several examples of application security extensions are described next.

URL: https://www.sciencedirect.com/science/article/pii/B9781931836562500052

Auditing Cisco Routers and Switches

Craig Wright, in The IT Regulatory and Standards Compliance Handbook, 2008

Modifying the nipper.ini File

Figure 10.25 presents the default configuration settings of Nipper. If you want to modify some parameters, you can do so directly in the nipper.ini file. The following example contains a portion of the configuration settings:

Figure 10.25. Nipper Config File

# Password / key audit options

Minimum Password Length = 8

Passwords Must Include Uppercase = off

Passwords Must Include Lowercase = off

Passwords Must Include Lowercase or Uppercase = on

Passwords Must Include Numbers = on

Passwords Must Include Special Characters = off

Configuring Nipper can be achieved by modifying the parameters in this file, such as those listed above. For example, the minimum password length can be changed to 10. After modifying the file, save it and run Nipper using the original command line that you used, which is presented again below.

nipper --ios-router --input=ABC_Company_Router.txt --output=ABC_Report.html
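As an illustration of how audit options like these behave, the following sketch (our own Python, not part of Nipper) applies checks analogous to the nipper.ini settings shown above, with the minimum length raised to 10 as in the example. The option names in the dictionary are ours.

```python
import re

# Hypothetical re-implementation of the password-audit options; the keys
# mirror the nipper.ini settings but this checker is an illustrative sketch.
POLICY = {
    "minimum_length": 10,            # raised from the default of 8, as in the text
    "require_upper_or_lower": True,  # "Passwords Must Include Lowercase or Uppercase = on"
    "require_numbers": True,         # "Passwords Must Include Numbers = on"
    "require_special": False,        # "Passwords Must Include Special Characters = off"
}

def audit_password(password, policy=POLICY):
    """Return a list of policy violations (an empty list means compliant)."""
    failures = []
    if len(password) < policy["minimum_length"]:
        failures.append("too short")
    if policy["require_upper_or_lower"] and not re.search(r"[A-Za-z]", password):
        failures.append("needs a letter")
    if policy["require_numbers"] and not re.search(r"[0-9]", password):
        failures.append("needs a digit")
    if policy["require_special"] and not re.search(r"[^A-Za-z0-9]", password):
        failures.append("needs a special character")
    return failures

print(audit_password("cisco123"))         # fails the 10-character minimum
print(audit_password("enable-secret99"))  # passes every enabled check
```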

The report produced by Nipper is configurable. The images in Figures 10.26 and 10.27 show the default HTML report format for Nipper. Of particular benefit is the inclusion of information about how to fix the problems with detailed descriptions of the risks associated with each misconfiguration. Each of the recommendations also includes the command line or configuration change needed to fix the problem.

Figure 10.26. Nipper Output File/Report

Figure 10.27. Nipper Output File/Report Recommendations Section

URL: https://www.sciencedirect.com/science/article/pii/B9781597492669000102

Microsoft Vista: Wireless World

In Microsoft Vista for IT Security Professionals, 2007

Changing Defaults

Nearly all devices are preconfigured with default configuration settings. These settings are well known to the public and are shared among similar devices. For example, every device has an administrator super-user logon name and password that you use to gain access to and configure your device. These administrator logon names and passwords are not secret; they are available to anyone. Changing the default administrator password on your wireless device will prevent an attacker from attempting to log on and view/change your wireless network settings. The username is often simply the word administrator and the password is typically set to empty (none), or the word administrator, admin, password, or public. The following sidebar provides examples of super-user passwords set by manufacturers by default.

Tools & Traps…

Vendor-Specific Default Superuser Passwords

The following list includes some publicly known administrator accounts from various vendors that are configured on your wireless access point by default:

3Com User: admin; Password: comcomcom

3Com Office Connect Wireless 11g Cable/DSL User: (none); Password: admin

ACCTON User: none; Password: 0

Actiontec User: admin; Password: password

Advantek Networks User: admin; Password: (none)

Amitech User: admin; Password: admin

Bausch Datacom User: admin; Password: epicrouter

Cisco AP1200 User: Cisco; Password: Cisco

Cisco AP1200 User: root; Password: Cisco

Cisco AP1100 User: (none); Password: Cisco

Cisco WLSE User: root; Password: blender

Cisco WLSE User: wlse; Password: wlsedb

E-Tech User: (none); Password: admin

Intel Wireless AP 2011 User: (none); Password: intel

Intel Wireless Gateway User: intel; Password: intel

Linksys User: admin; Password: admin

Motorola Wireless Router User: admin; Password: Motorola

Topcom User: admin; Password: admin

In addition to changing your default administrative logon name and password, you should disable unwanted services on your access point, such as the Network Time Protocol (NTP), the Cisco Discovery Protocol (CDP), the Hypertext Transfer Protocol (HTTP), Telnet, and the Simple Network Management Protocol (SNMP), if you do not plan to use them. These services, when not disabled, act as doors into your wireless device. An attacker could find and use vulnerabilities in these service ports to gain unauthorized access.
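As a simple illustration of why these defaults matter, the following sketch (hypothetical code, not a real audit tool) checks a login pair against a few of the factory defaults from the sidebar above:

```python
# A minimal sketch: the dictionary copies a few entries from the sidebar;
# an empty string stands for a "(none)" field.
DEFAULT_CREDS = {
    "3Com": [("admin", "comcomcom")],
    "Linksys": [("admin", "admin")],
    "Cisco AP1200": [("Cisco", "Cisco"), ("root", "Cisco")],
    "Intel Wireless Gateway": [("intel", "intel")],
}

def is_default_login(vendor, username, password):
    """True if (username, password) matches a known factory default for the vendor."""
    return (username, password) in DEFAULT_CREDS.get(vendor, [])

print(is_default_login("Linksys", "admin", "admin"))  # True -> change it!
```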

URL: https://www.sciencedirect.com/science/article/pii/B978159749139650011X

MCSE 70-293: Planning, Implementing, and Maintaining a High-Availability Strategy

Martin Grasdal, ... Dr. Thomas W. Shinder, Technical Editor, in MCSE (Exam 70-293) Study Guide, 2003

Summary of Exam Objectives

Windows Server 2003 often performs reasonably well in its default configuration, but insufficient memory, CPU, disk, or network resources can reduce performance to an unacceptable level. Proper tuning and allocation of these resources will ensure adequate performance. Proper configuration of the server’s page files can improve performance. Regular use of the Disk Defragmenter utility will ensure that your file systems do not become a bottleneck for read and write operations. Use efficient and intelligent network adapters to handle some of the processing load and reduce the overall impact communications have on the system.

The System Monitor utility can be used to monitor various counters present in the system. These counters display real performance information about what is occurring in the system. Some counters can display statistics as percentages, others as cumulative counts of events, and others as immediate absolute values. System Monitor can be used to view current activity in the system or to view data from log files.

A properly developed baseline can help in planning for increased growth and in identifying resources that are being overutilized. A baseline provides a mechanism for identifying what normal operating conditions are for a server. The baseline acts as a reference for troubleshooting performance issues.

The operating system and some applications record events in numerous event log files. The events in these files are always in the same format and can be viewed, searched, and monitored to determine if a system is functioning properly. Entries in the event logs indicate the severity or nature of the events. Security auditing can be enabled and security-related events captured in the event logs. The logs themselves can be archived to create a historical record of a server’s activities.

Backing up data is a must to ensure system availability. Only user accounts with elevated user rights can perform backups or restores. Different methods (normal, differential, and incremental) for performing backups are available to accomplish different objectives. Backups can be performed to tape drives, network shares, or local disks, but not to recordable or rewritable CDs or DVDs.
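The difference between the three backup methods can be sketched with the archive attribute that Windows sets on changed files; the code below is an illustrative model of the selection logic, not the Backup Utility's implementation:

```python
# Each file carries an "archive" flag, set when the file changes.
# Normal (full) and incremental backups clear the flag; differential does not.
files = {
    "report.doc": {"archive": True},
    "budget.xls": {"archive": False},  # unchanged since the last backup
    "notes.txt":  {"archive": True},
}

def select(files, backup_type):
    """Return the files a backup of the given type would copy."""
    if backup_type == "normal":          # everything, then clear all flags
        chosen = list(files)
        for f in files.values():
            f["archive"] = False
    elif backup_type == "incremental":   # changed files only, clears flags
        chosen = [n for n, f in files.items() if f["archive"]]
        for n in chosen:
            files[n]["archive"] = False
    elif backup_type == "differential":  # changed files only, flags left set
        chosen = [n for n, f in files.items() if f["archive"]]
    else:
        raise ValueError(backup_type)
    return chosen

print(select(files, "differential"))  # ['report.doc', 'notes.txt']
print(select(files, "incremental"))   # same files, but flags are now cleared
print(select(files, "incremental"))   # [] -- nothing changed since last run
```

This is why a restore from incrementals needs the full backup plus every incremental since, while a restore from differentials needs only the full backup plus the latest differential.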

Some services like DHCP, WINS, and DNS may have special considerations or configuration issues that need to be addressed before backups are performed. Clustered server disks also require special consideration for backups, and the new Volume Shadow Copy feature assists in creating backups of open files.

The Windows Backup Utility can be run either as a Wizard or in Advanced Mode. The Wizard works in most situations and steps you through the process of creating or restoring a backup. The Advanced Mode gives you access to the more powerful options of the utility and lets you fine-tune your backups. The Backup Utility also lets you schedule backup sessions, so that you can create a relatively simple and regular backup process.

The new ASR feature of Windows Server 2003 simplifies the process of re-creating a failed server installation. The ASR process replaces the older ERD process used in previous versions of Windows. Proper planning and preparation must be completed before ASR can be used to restore a system, and performing an ASR restore should be the last resort. An ASR restore requires a floppy diskette drive to be present in the server, but one is not required to create an ASR backup.

The proper use of fault tolerance will mean that services will continue to be provided even when something breaks down. Redundancy in hardware, software, and communications ensures a reliable environment. The use of redundant network interfaces and proxy servers will ensure reliable communications. Using disk RAID arrays for the storage of applications and data will help prevent downtime due to a hard drive failure and may also be used as a performance enhancer. Using redundant components to help cool your server and provide power when the utility power fails will ensure your server operates in adverse conditions.

URL: https://www.sciencedirect.com/science/article/pii/B9781931836937500129

Introducing Big Data Technologies

Krish Krishnan, in Data Warehousing in the Age of Big Data, 2013

Cassandra ring architecture

Figure 4.26 shows the ring architecture we described earlier. In this configuration, we can visualize how Cassandra provides for scalability and consistency.

Figure 4.26. Cassandra ring architecture.

In the ring architecture, the key is the connector to the different nodes in the ring, and the nodes hold replicas. For example, with a replication factor of N = 3, data written to A is also replicated to B and C; with N = 2, data on D is replicated to one other node, such as E or F.
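The replica placement described above can be sketched with a simple successor-based strategy. This is an illustration only; Cassandra's real strategies also take rack and datacenter topology into account, and the node names are invented.

```python
# Nodes listed in ring order; replicas go to the coordinator's clockwise successors.
RING = ["A", "B", "C", "D", "E", "F"]

def replicas(coordinator, n):
    """Return the n nodes holding a copy: the coordinator plus its successors."""
    start = RING.index(coordinator)
    return [RING[(start + i) % len(RING)] for i in range(n)]

print(replicas("A", 3))  # ['A', 'B', 'C'] -- A replicated to B and C
print(replicas("D", 2))  # ['D', 'E']      -- D plus one replica
```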

Data placement

Data placement around the ring is not fixed in any default configuration. Cassandra provides two components called snitches and strategies, to determine which nodes will receive copies of data.

Snitches define the proximity of nodes within the ring and provide information on the network topology.

Strategies use the information snitches provide them about node proximity along with an implemented algorithm to collect nodes that will receive writes.

Data partitioning

Data is distributed across the nodes by using partitioners. Since Cassandra is based on a ring topology or architecture, the ring is divided into ranges equal to the number of nodes, where each node can be responsible for one or more ranges of the data. When a node is joined to a ring, a token is issued, and this token determines the node’s position on the ring and assigns the range of data it is responsible for. Once the assignment is done, we cannot undo it without reloading all the data.

Cassandra provides native partitioners and supports any user-defined partitioner. The key feature difference in the native partitioner is the order preservation of keys.

Random partitioner. This is the default choice for Cassandra. It uses an MD5 hash function to map keys into tokens, which are evenly distributed across the cluster. Random-partition hashing ensures that when nodes are added to the cluster, the smallest possible set of data is affected. While the keys are evenly distributed, there is no ordering of the data, so a range query must be processed by every node in the cluster.

Order-preserving partitioners. As the name suggests, these preserve the order of the row keys as they are mapped into the token space. Since keys are placed according to an ordered list of values, efficient range-based data retrieval is possible. The biggest drawback of this design is that a node and its replicas may become unstable over time, especially when large reads or writes are concentrated on one node.
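The random partitioner's key-to-node mapping can be sketched as consistent hashing over MD5 tokens. The node names and the way their tokens are derived below are invented for illustration; real deployments assign tokens explicitly or via virtual nodes.

```python
import hashlib
from bisect import bisect_right

def md5_token(key):
    # 128-bit MD5 digest interpreted as an integer token
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

# Four hypothetical nodes, each owning the token derived from its name
NODE_TOKENS = sorted((md5_token(f"node-{i}"), f"node-{i}") for i in range(4))

def node_for(key):
    """Route a key to the node whose token is the key hash's successor on the ring."""
    tokens = [t for t, _ in NODE_TOKENS]
    idx = bisect_right(tokens, md5_token(key)) % len(NODE_TOKENS)
    return NODE_TOKENS[idx][1]

print(node_for("user:42"))
```

Because MD5 scatters keys uniformly over the token space, load spreads evenly; the price, as noted above, is that a range of keys maps to unrelated tokens, so range queries must touch every node.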

URL: https://www.sciencedirect.com/science/article/pii/B9780124058910000040

Training

In Virtualization for Security, 2009

Suggested Vulnerabilities for Windows

The first type of vulnerability we wanted to demonstrate was default configuration issues and poorly chosen passwords. We created a number of users on the test server. Some users had no passwords, and others had simple or predictable passwords. The class was taught how to look up default passwords for various software installations, as well as how to enumerate users on servers that allow it. They were also taught how to look at the password policy to determine whether it would be safe to attempt to brute-force passwords. Of course, on our test server, brute forcing was configured to be safe so that we could demonstrate tools to perform such attacks. We also installed MS SQL Server, with the password set to blank, as was the default a number of years ago. This allowed the students to learn both how to connect to such a server and how to exploit a database server using SQL commands.

We also made sure that significant information was available using publicly available tools. SNMP community strings were set to public. I believe this server even displayed configuration information using the built-in IIS Web server with some custom ASP scripts (which had vulnerabilities in them as well).

In addition, we installed some open source software with known exploits. The goal in installing this software was to simulate a real environment which was performing useful functions. The software we chose had a buffer overflow in the portion of the application which collected data from the network. We also chose software that had “non-overflow” vulnerabilities. If a tester issued a properly formatted request, then the tester could retrieve any file on the system.

Finally, the operating system was left unpatched. We had to be careful to keep the firewall rules in place, as putting such a server on the corporate network would have been a violation of the usage guidelines, and it likely would have fallen victim to the occasional worm outbreaks.

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000141

Data-Intensive Computing

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

8.3.2.5 Running the application

Aneka produces a considerable amount of logging information. The default configuration of the logging infrastructure creates a new log file for each activation of the Container process, or as soon as the size of the log file exceeds 10 MB. Therefore, simply by continuing to run an Aneka Cloud for a few days, it is quite easy to collect enough data to mine for our sample application. Moreover, this scenario also constitutes a real case study for MapReduce, since one of its most common practical applications is extracting semistructured information from logs and traces of execution.

In the execution of the test, we used a distributed infrastructure consisting of seven worker nodes and one master node interconnected through a LAN. We processed 18 log files of several sizes for a total aggregate size of 122 MB. The execution of the MapReduce job over the collected data produced the results that are stored in the loglevels.txt and components.txt files and represented graphically in Figures 8.12 and 8.13, respectively.

Figure 8.12. Log-level entries distribution.

Figure 8.13. Component entries distribution.

The two graphs show that there is a considerable amount of unstructured information in the log files produced by the Container processes. In particular, about 60% of the log content is skipped during the classification. This content is more likely due to the result of stack trace dumps into the log file, which produces—as a result of ERROR and WARN entries—a sequence of lines that are not recognized. Figure 8.13 shows the distribution among the components that use the logging APIs. This distribution is computed over the data recognized as a valid log entry, and the graph shows that just about 20% of these entries have not been recognized by the parser implemented in the map function. We can then infer that the meaningful information extracted from the log analysis constitutes about 32% (80% of 40% of the total lines parsed) of the entire log data.

Despite the simplicity of the parsing function implemented in the map task, this practical example shows how the Aneka MapReduce programming model can be used to easily perform massive data analysis tasks. The purpose of the case study was not to create a very refined parsing function but to demonstrate how to logically and programmatically approach a realistic data analysis case study with MapReduce and how to implement it on top of the Aneka APIs.
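The kind of map function described above can be sketched as follows. The log-line format assumed here ("LEVEL component message") is an illustration, not Aneka's actual format, and the reduce step is collapsed into a single in-memory counter:

```python
from collections import Counter

def map_log_line(line):
    """Emit (key, 1) pairs for the log level and component of a recognized line."""
    parts = line.split(maxsplit=2)
    if len(parts) >= 2 and parts[0] in {"DEBUG", "INFO", "WARN", "ERROR"}:
        return [("level:" + parts[0], 1), ("component:" + parts[1], 1)]
    return []  # unrecognized lines (e.g., stack-trace dumps) are skipped

def reduce_counts(pairs):
    """Sum the 1s per key, as the reduce phase would."""
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return counts

log = ["INFO Scheduler job started",
       "ERROR Storage disk full",
       "  at Aneka.Runtime.Container.Start()"]  # stack-trace line, skipped
pairs = [p for line in log for p in map_log_line(line)]
print(reduce_counts(pairs))
```

Skipped lines are exactly the "unstructured" 60% discussed above: lines the classifier cannot parse contribute nothing to the counts.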

URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000085

Infrastructure and technology

Krish Krishnan, in Building Big Data Applications, 2020

Cassandra ring architecture

Fig. 2.26 shows the ring architecture we described earlier. In this configuration, we can visualize how Cassandra provides for scalability and consistency.

Figure 2.26. Cassandra ring architecture.

In the ring architecture, the key is the connector to the different nodes in the ring, and the nodes hold replicas. For example, with a replication factor of N = 3, data written to A is also replicated to B and C; with N = 2, data on D is replicated to one other node, such as E or F.

Data placement

Data placement around the ring is not fixed in any default configuration. Cassandra provides two components called snitches and strategies to determine which nodes will receive copies of data.

Snitches define the proximity of nodes within the ring and provide information on the network topology.

Strategies use the information snitches provide them about node proximity along with an implemented algorithm to collect nodes that will receive writes.

Data partitioning

Data is distributed across the nodes by using “partitioners”. Since Cassandra is based on a ring topology or architecture, the ring is divided into ranges equal to the number of nodes, where each node can be responsible for one or more ranges of the data. When a node joins a ring, a token is issued, and this token determines the node's position on the ring and assigns the range of data it is responsible for. Once the assignment is done, we cannot undo it without reloading all the data.

Cassandra provides native partitioners and supports any user-defined partitioner. The key feature difference in the native partitioner is the order preservation of keys.

Random partitioner—the default choice for Cassandra. It uses an MD5 hash function to map keys into tokens, which are evenly distributed across the cluster. Random-partition hashing ensures that when nodes are added to the cluster, the smallest possible set of data is affected. While the keys are evenly distributed, there is no ordering of the data, so a range query must be processed by every node in the cluster.

Order-preserving partitioners—as the name suggests, these preserve the order of the row keys as they are mapped into the token space. Since keys are placed according to an ordered list of values, efficient range-based data retrieval is possible. The biggest drawback of this design is that a node and its replicas may become unstable over time, especially when large reads or writes are concentrated on one node.

Peer-to-peer—simple scalability

Cassandra is by design a peer-to-peer architecture, meaning its configuration has no designated master or slave nodes. The simplicity of this design allows nodes to be removed from or added to a cluster with ease. When a node goes down, its replicas take over processing, allowing a graceful shutdown; similarly, when a node is added to a cluster and is assigned its keys and tokens, it joins the cluster and learns the topology before commencing operations.

Gossip protocol—node management

In Cassandra architecture, to manage partition tolerance and decentralization of data, managing intranode communication becomes a key feature. This is accomplished by using the gossip protocol. Alan Demers, a researcher at Xerox's Palo Alto Research Center, who was studying ways to route information through unreliable networks, originally coined the term “gossip protocol” in 1987.

In Cassandra, the gossip protocol is implemented as the gossiper class. When a node is added to the cluster, it also registers with the gossiper to receive communication. The gossiper selects a random node and checks whether it is alive or dead by sending messages to it. If a node is found to be unresponsive, the gossiper class triggers the “hinted handoff” process, if configured. In order for the gossiper class to distinguish between failure detection and long-running transactions, Cassandra implements another algorithm called the “Phi Accrual Failure Detection algorithm” (based on the well-known paper by Naohiro Hayashibara et al.). According to the accrual detection algorithm, a node can be marked as suspicious based on the time it takes to respond: the longer the delay, the higher the suspicion that the node is dead. This delay, or accrued value, is captured by phi and compared to a threshold, which the gossiper uses to determine the state of the node. The implementation is accomplished by the “failuredetector” class, which has three methods:

isAlive(node_address)—what the detector reports about a given node's aliveness.

interpret(node_address)—used by the gossiper to make a decision on the health of the node, based on the suspicion level reached by calculating phi (the accrued value of the state of responsiveness).

report(node_address)—invoked when a node receives a heartbeat.
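The accrual idea behind these methods can be sketched as follows. The exponential heartbeat model and the threshold value below are illustrative simplifications, not Cassandra's actual implementation, and the method names only loosely mirror the failuredetector class:

```python
import math
import time

class FailureDetector:
    """Toy phi-accrual sketch: suspicion grows with time since the last heartbeat."""

    def __init__(self, threshold=8.0, mean_interval=1.0):
        self.threshold = threshold           # phi above this -> node suspected dead
        self.mean_interval = mean_interval   # expected heartbeat interval (seconds)
        self.last_heartbeat = {}

    def report(self, node, now=None):
        """Record a heartbeat arrival from the node."""
        self.last_heartbeat[node] = now if now is not None else time.time()

    def phi(self, node, now=None):
        """Suspicion level: -log10 of the heartbeat still being 'on time'."""
        now = now if now is not None else time.time()
        elapsed = now - self.last_heartbeat.get(node, now)
        # Under an exponential inter-arrival model, phi grows linearly with delay.
        return (elapsed / self.mean_interval) * math.log10(math.e)

    def is_alive(self, node, now=None):
        return self.phi(node, now) < self.threshold

fd = FailureDetector()
fd.report("10.0.0.5", now=0.0)
print(fd.is_alive("10.0.0.5", now=2.0))    # True: recent heartbeat
print(fd.is_alive("10.0.0.5", now=60.0))   # False: long silence -> suspected dead
```

Note how the output is a continuously accruing suspicion rather than a binary timeout, which is what lets the gossiper tolerate long-running transactions without declaring the node dead.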

With the peer-to-peer and gossip protocol implementations, we can see how the Cassandra architecture keeps the nodes synced and the operations on the nodes scalable and reliable. This model is derived and enhanced from Amazon's Dynamo paper. Based on the discussion of Cassandra so far, we can see how the integration of two architectures, from Bigtable and Dynamo, has created a row-oriented column store that can scale and sustain performance. At the time of writing, Cassandra is a top-level Apache project. Facebook has since moved on to proprietary techniques for large-scale data management, but several large and well-known companies have adopted and implemented Cassandra for their large-scale data management needs, especially on the Web, with continuous customer or user interactions.

There are a lot more details on implementing Cassandra and performance tuning, which will be covered in the latter half of this book when we discuss the implementation and integration architectures.

Basho Riak

Riak is a document-oriented database. It is similar in architecture to Cassandra, and its default setup is a four-node cluster. It follows the same ring topology and gossip protocols in its underlying architecture. Each of the four nodes contains eight partitions, thus providing a 32-partition ring for use. A process called vnodes (virtual nodes) manages the partitions across the four-node cluster. Riak is written in Erlang and uses MapReduce. Another interesting feature of Riak is the concept of links and link walking. Links enable you to create metadata to connect objects. Once you create links, you can traverse the objects; this is the process of link walking. The flexibility of links allows you to determine dynamically how to connect multiple objects. More information on Riak is available on the website of Basho, the company that designed and developed Riak.
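Links and link walking can be illustrated with a toy in-memory model. This is our own sketch (plain dictionaries, invented keys and tags), not Riak's API:

```python
# Each object carries tagged links to other object keys; walking a tag
# follows those links from object to object.
objects = {
    "user:ann": {"links": [("posts", "post:1"), ("posts", "post:2")]},
    "post:1":   {"links": [("author", "user:ann")]},
    "post:2":   {"links": []},
}

def walk(key, tag):
    """Return the keys reachable from `key` by following links tagged `tag`."""
    return [target for t, target in objects[key]["links"] if t == tag]

print(walk("user:ann", "posts"))  # ['post:1', 'post:2']
print(walk("post:1", "author"))   # ['user:ann']
```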

Other popular NoSQL implementations are document databases (CouchBase, MongoDB, and others) and graph databases (Neo4j). Let us understand the premise behind the document-database and graph-database architectures.

Document-oriented databases can be defined as a schemaless and flexible model of storing data as documents, rather than relational structures. The document will contain all the data it needs to answer specific query questions. Benefits of this model include the following:

Which of the following describes a configuration baseline?

A configuration baseline is a set of consistent requirements for a workstation or server.

Which Microsoft tool analyzes a computer's settings and compares its configuration to a baseline?

The Security Compliance Toolkit (SCT) is a set of tools that allows enterprise security administrators to download, analyze, test, edit, and store Microsoft-recommended security configuration baselines for Windows and other Microsoft products.

Which of the following are performed by the Microsoft Baseline Security Analyzer?

MBSA performs the following actions during a scan: it checks for available updates to the operating system, Microsoft Data Access Components (MDAC), MSXML (Microsoft XML Parser), the .NET Framework, and SQL Server, and it scans the computer for insecure configuration settings.

What is configuration baseline quizlet?

A configuration baseline is a set of consistent requirements for a workstation or server. A security baseline is a component of the configuration baseline that ensures that all workstations and servers comply with the security goals of the organization.