SSL Vulnerable to BEAST attack

A vulnerability exists in SSL 3.0 and TLS 1.0 that could allow information disclosure if an attacker intercepts encrypted traffic served from an affected system. The weakness stems from the predictable Initialisation Vectors (IVs) used by the CBC-mode cipher suites: in TLS 1.0 the IV for each record is the last ciphertext block of the previous record, which an attacker can observe. The exploit can be performed through multiple injection points native to the browser's functionality.

The risk is reduced by the fact that the attacker must first obtain a man-in-the-middle position from which traffic can be intercepted; once that position is held, it is generally easier to attack the victim through other methods (SSL stripping, mixed scripting [requesting HTTP resources from an HTTPS connection], and so on) which do not require complex cryptanalysis such as BEAST.

TestSSLServer.exe (43.00 kb)

http://www.bolet.org/TestSSLServer/

 

Supported versions:
 SSLv2 SSLv3 TLSv1.0
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  SSLv2
     RC4_128_WITH_MD5
     DES_192_EDE3_CBC_WITH_MD5
  SSLv3
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
  TLSv1.0
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_256_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
----------------------
Server certificate(s):
  0edc8b5e2d1e4c803319c3e4e80dd9945d953db2: CN=application.local.someone.zone
----------------------
Minimal encryption strength:     strong encryption (96-bit or more)
Achievable encryption strength:  strong encryption (96-bit or more)
BEAST status: vulnerable
CRIME status: protected

Recommendation:

Client-side:

Various browsers are introducing or have introduced mitigations for the issue which make exploitation less likely. There are also steps which can be taken on the server side to make exploitation impossible.

Server-side:

Enabling and prioritising TLS 1.1/1.2 is advised where possible, although removing support for TLS 1.0 is impractical at this time.

In the short-term due to the lack of wide-scale support by browsers and servers alike, prioritising the use of a stream cipher (such as RC4-SHA) instead of a CBC-mode cipher is recommended in order to maintain compatibility with browsers (see note).

Migration from TLS 1.0 and below to TLS 1.1/1.2 should be considered as a medium-term option for secure applications.

SSL Best Practice Guide:

https://www.ssllabs.com/projects/best-practices/index.html

BEAST attack

On September 23, 2011 researchers Thai Duong and Juliano Rizzo demonstrated a proof of concept called BEAST ("Browser Exploit Against SSL/TLS"), using a Java applet to violate same-origin policy constraints, for a long-known cipher block chaining (CBC) vulnerability in TLS 1.0. Practical exploits had not previously been demonstrated for this vulnerability, which was originally discovered by Phillip Rogaway in 2002. The vulnerability had been fixed in TLS 1.1 in 2006, but TLS 1.1 had not seen wide adoption prior to this attack demonstration.

Mozilla updated the development versions of their NSS libraries to mitigate BEAST-like attacks. NSS is used by Mozilla Firefox and Google Chrome to implement SSL. Some web servers that have a broken implementation of the SSL specification may stop working as a result.

Microsoft released Security Bulletin MS12-006 on January 10, 2012, which fixed the BEAST vulnerability by changing the way that the Windows Secure Channel (SChannel) component transmits encrypted network packets.

Users of Windows 7 and Windows Server 2008 R2 can enable use of TLS 1.1 and 1.2, but this work-around will fail if it is not supported by the other end of the connection and will result in a fall-back to TLS 1.0.

Verbose Error Messages

The software generates an error message that includes sensitive information about its environment, users, or associated data. 

Extended Description

The sensitive information may be valuable information on its own (such as a password), or it may be useful for launching other, more deadly attacks. If an attack fails, an attacker may use error information provided by the server to launch another more focused attack. For example, an attempt to exploit a path traversal weakness might yield the full pathname of the installed application. In turn, this could be used to select the proper number of ".." sequences to navigate to the targeted file. An attack using SQL injection might not initially succeed, but an error message could reveal the malformed query, which would expose query logic and possibly even passwords or other sensitive information used within the query.

Recommendation

Suppress these error messages by ensuring that a customised error handler is called in the event of an error. This can produce a generic message which does not hint at the underlying cause of the exception.

References

OWASP – Error Handling
http://www.owasp.org/index.php/Improper_Error_Handling

How to: Display Safe Error Messages

When your application displays error messages, it should not give away information that a malicious user might find helpful in attacking your system. For example, if your application unsuccessfully tries to log in to a database, it should not display an error message that includes the user name it is using.

There are a number of ways to control error messages, including the following:

Configure the application not to show verbose error messages to remote users. (Remote users are those who request pages while not working on the Web server computer.) You can optionally redirect errors to an application page.

Include error handling whenever practical and construct your own error messages. In your error handler, you can test to see whether the user is local and react accordingly.

Create a global error handler at the page or application level that catches all unhandled exceptions and routes them to a generic error page. That way, even if you did not anticipate a problem, at least users will not see an exception page.
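A minimal sketch of such a global handler in Global.asax.cs follows; the ~/Error.aspx page and the logging call are illustrative assumptions, not part of the original application.

// Global.asax.cs - a minimal sketch of a global error handler.
// "~/Error.aspx" is an assumed generic error page; adjust to suit.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Record the real exception for the developer/administrator...
        Exception ex = Server.GetLastError();
        System.Diagnostics.Trace.TraceError(ex == null ? "Unknown error" : ex.ToString());

        // ...then clear it and send the user to a generic page that gives
        // away nothing about the underlying cause.
        Server.ClearError();
        Response.Redirect("~/Error.aspx");
    }
}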

To configure the application to turn off errors for remote users

In the Web.config file for your application, make the following changes to the customErrors element:

  • Set the mode attribute to RemoteOnly (case-sensitive). This configures the application to show detailed errors only to local users (that is, to you, the developer).
  • Optionally include a defaultRedirect attribute that points to an application error page.
  • Optionally include <error> elements that redirect specific errors to specific pages. For example, you can redirect standard 404 errors (page not found) to your own application page.

The following code example shows a typical customErrors block in the Web.config file.

<customErrors mode="RemoteOnly" defaultRedirect="AppErrors.aspx">
   <error statusCode="404" redirect="NoSuchPage.aspx"/>
   <error statusCode="403" redirect="NoAccessAllowed.aspx"/>
</customErrors>

Environment issue - Untrusted Certificate

The certificate is signed by an unrecognised certificate authority.  If a browser receives a self-signed certificate, it pops up a warning, and the burden falls to the user to confirm the identity.  Pushing this decision to the user is ultimately what opens up the possibility of a man-in-the-middle (MITM) attack.  The security issue is not with self-signed certificates themselves, but with the way users interact with them in the browser.

Recommendation:


Purchase or generate a proper certificate for this service.

For more on man-in-the-middle attacks, see:

http://www.schneier.com/blog/archives/2010/04/man-in-the-midd_2.html

You can purchase SSL certificates from Symantec

https://www.symantec.com/en/uk/verisign/ssl-certificates/secure-site

Denial Of Service (DoS) attacks via SQL Wildcards should be prevented

SQL Wildcard attacks force the underlying database to carry out CPU-intensive queries by using several wildcards. This vulnerability generally exists in search functionalities of web applications. Successful exploitation of this attack will cause Denial of Service (DoS).

Depending on the connection pooling settings of the application and the time taken for attack query to execute, an attacker might be able to consume all connections in the connection pool, which will cause database queries to fail for legitimate users.

By default in ASP.NET, the maximum number of connections in the pool is 100 and the timeout is 30 seconds. Thus, if an attacker can run 100 queries that each take 30+ seconds to execute within a 30-second window, no one else will be able to use the database-related parts of the application.

Recommendation

If the application does not require this sort of advanced search, all wildcards should be escaped or filtered.
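As a minimal sketch in C#, the characters that SQL Server's LIKE operator treats as wildcards can be neutralised before the value reaches the query; EscapeLikeValue and the Article query below are illustrative names only.

using System.Data.SqlClient;

static class SearchHelper
{
    // Escape the characters SQL Server treats as wildcards inside LIKE.
    // "[" must be escaped first, otherwise the later replacements are broken.
    public static string EscapeLikeValue(string input)
    {
        return input
            .Replace("[", "[[]")
            .Replace("%", "[%]")
            .Replace("_", "[_]");
    }

    public static SqlCommand BuildSearchCommand(SqlConnection connection, string searchTerm)
    {
        var command = new SqlCommand(
            "SELECT TOP 50 * FROM Article WHERE Content LIKE @term", connection);
        command.Parameters.AddWithValue("@term", "%" + EscapeLikeValue(searchTerm) + "%");
        command.CommandTimeout = 5; // fail fast rather than tying up the database
        return command;
    }
}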

References:

OWASP Testing for SQL Wildcard Attacks

https://www.owasp.org/index.php/Testing_for_SQL_Wildcard_Attacks_(OWASP-DS-001)

DoS Attacks using SQL Wildcards

http://www.zdnet.com/blog/security/dos-attacks-using-sql-wildcards-revealed/1134

Brief Summary

SQL Wildcard Attacks are about forcing the underlying database to carry out CPU-intensive queries by using several wildcards. This vulnerability generally exists in search functionalities of web applications. Successful exploitation of this attack will cause Denial of Service.

Description of the Issue

SQL Wildcard attacks might affect all database back-ends but mainly affect SQL Server because the MS SQL Server LIKE operator supports extra wildcards such as "[]","[^]","_" and "%".

In a typical web application, if you were to enter "foo" into the search box, the resulting SQL query might be:

SELECT * FROM Article WHERE Content LIKE '%foo%'

Against a reasonably sized database of 1-100,000 records, the query above will take less than a second. The following query, against the very same database, will take about 6 seconds with only 2,600 records.

SELECT TOP 10 * FROM Article WHERE Content LIKE '%_[^!_%/%a?F%_D)_(F%)_%([)({}%){()}£$&N%_)$*£()$*R"_)][%](%[x])%a][$*"£$-9]_%'

So, if the tester wanted to tie up the CPU for 6 seconds, they would enter the following into the search box:

_[^!_%/%a?F%_D)_(F%)_%([)({}%){()}£$&N%_)$*£()$*R"_)][%](%[x])%a][$*"£$-9]_

Black Box testing and example

Testing for SQL Wildcard Attacks:

Craft a query which will not return a result and includes several wildcards. You can use one of the example inputs below.

Send this data through the search feature of the application. If the application takes more time generating the result set than a usual search would take, it is vulnerable.

Example Attack Inputs to send

    '%_[^!_%/%a?F%_D)_(F%)_%([)({}%){()}£$&N%_)$*£()$*R"_)][%](%[x])%a][$*"£$-9]_%'
    '%64_[^!_%65/%aa?F%64_D)_(F%64)_%36([)({}%33){()}£$&N%55_)$*£()$*R"_)][%55](%66[x])%ba][$*"£$-9]_%54' bypasses modsecurity
    _[r/a)_ _(r/b)_ _(r-d)_
    %n[^n]y[^j]l[^k]d[^l]h[^z]t[^k]b[^q]t[^q][^n]!%
    %_[aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa[! -z]@$!_%

...

Result Expected

If the application is vulnerable, the response time should be longer than usual.


Here, it was possible to search by keying in a value for the forename/firstname field as M__E (i.e. M underscore underscore E) and the search returned results treating the two underscores as wildcards for any two characters.

This may still exist in the current code base, so it is a case of going through the search forms, checking whether the behaviour occurs, and fixing it where the vulnerability exists.

DoS Attacks Using SQL Wiltcards.pdf (567.23 kb)

Check out the solution I have produced to prevent this: Removing and Cleaning search content to prevent DoS attacks

HTTP Header Disclosure

Vulnerability overview/description

Due to unsanitized user input it is possible to inject arbitrary HTTP header values in certain HTTP responses of the Satellite Server. This can be exploited, for example, to perform session fixation and malicious redirection attacks via the Set-Cookie and the Refresh headers. Moreover, the Satellite Server caches these HTTP responses with the injected HTTP header resulting in all further requests to the same resource being served with the poisoned HTTP response, while these objects remain in cache.

Information Disclosure

Information disclosure enables an attacker to gain valuable information about a system. Therefore, always consider what information you are revealing and whether it can be used by a malicious user. The following lists possible information disclosure attacks and provides mitigations for each. 

Message Security and HTTP

If you are using message-level security over an HTTP transport layer, be aware that message-level security does not protect HTTP headers. The only way to protect HTTP headers is to use HTTPS transport instead of HTTP. HTTPS transport causes the entire message, including the HTTP headers, to be encrypted using the Secure Sockets Layer (SSL) protocol.

http://msdn.microsoft.com/en-us/library/aa738441.aspx 

Environment change

The web application returned information about itself in its HTTP headers that could aid an attacker.  Default web server installations often include the vendor and version details of the web server software, and possibly further information about the scripting services installed.
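As a minimal sketch (assuming IIS 7 or later running in integrated pipeline mode), these identifying headers can be stripped just before the response is sent; the X-AspNet-Version header can also be switched off in web.config with <httpRuntime enableVersionHeader="false" />.

// Global.asax.cs - strip identifying headers before the response is sent.
// Requires the IIS integrated pipeline; header names shown are the usual defaults.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;
        if (context == null) return;

        context.Response.Headers.Remove("Server");           // e.g. Microsoft-IIS/7.5
        context.Response.Headers.Remove("X-Powered-By");      // set by IIS
        context.Response.Headers.Remove("X-AspNet-Version");  // framework version
    }
}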

User Enumeration - Login failure messages shouldn't give out any information that results in vulnerabilities.

Is it possible to enumerate user account details within the Web application via the logon page?

Where an application requires account details to retrieve other information, it may be possible to enumerate the details based on the error message returned by the application.

In this case it was also possible to determine the state of the user account

Recommendation:

Messages which allow an attacker to enumerate account details should be removed. A generic error message which does not disclose information about account information should be used.

References:

OWASP Testing for user enumeration

https://www.owasp.org/index.php/Testing_for_user_enumeration_(OWASP-AT-002)

Enumeration

  • Enumeration is the first attack on the target network; it is the process of gathering information about a target machine by actively connecting to it.
  • Enumeration aims to identify user accounts, system accounts and admin accounts, for example by enumerating a Windows Active Directory.
  • NetBIOS names can be discovered with NBTScan.
  • Null sessions and connections can be established using tools such as DumpSec, Winfo and Sid2User.

The login failure messages give away too much information, making it possible to enumerate user details via the log-on page: they report whether an account exists and whether it is locked out.

There should ideally be just one generic message that's of no use to a potential hacker.

See more at: http://www.ehacking.net/2011/04/scanning-and-enumeration-second-step-of.html

Other Considerations

You'll need to log, somewhere in your application, the actual reason why the user could not log in; this could be that the account is locked, suspended, deleted, and so on.

By logging this information, a system administrator or the like can see why a user could not get access to the application through the login page.
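A minimal sketch of what this might look like in an MVC controller follows; IAccountService, ILog and the view names are hypothetical placeholders, not part of the application.

using System.Web.Mvc;

public enum LoginFailureReason { UnknownUser, WrongPassword, Locked, Suspended, Deleted }

public interface IAccountService
{
    bool TryAuthenticate(string username, string password, out LoginFailureReason reason);
}

public interface ILog { void Warn(string message); }

public class AccountController : Controller
{
    private readonly IAccountService _accounts;
    private readonly ILog _log;

    public AccountController(IAccountService accounts, ILog log)
    {
        _accounts = accounts;
        _log = log;
    }

    [HttpPost]
    public ActionResult Login(string username, string password)
    {
        LoginFailureReason reason;
        if (_accounts.TryAuthenticate(username, password, out reason))
            return RedirectToAction("Index", "Home");

        // Log the real reason (unknown user, locked, suspended, deleted...)
        // so an administrator can see why access was refused.
        _log.Warn(string.Format("Login failed for '{0}': {1}", username, reason));

        // The browser always gets the same generic message, so account
        // details cannot be enumerated from the response.
        ModelState.AddModelError("", "The username or password is incorrect.");
        return View();
    }
}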

PenTest where to start

I would like to secure an MVC application, and one way of ensuring a secure application is to run it through pentesting (penetration testing). But what is pentesting?

Wikipedia

A penetration test, occasionally pentest, is a method of evaluating computer and network security by simulating an attack on a computer system or network from external and internal threats. The process involves an active analysis of the system for any potential vulnerabilities that could result from poor or improper system configuration, both known and unknown hardware or software flaws, or operational weaknesses in process or technical countermeasures. This analysis is carried out from the position of a potential attacker and can involve active exploitation of security vulnerabilities.

Pentests should be performed by someone who has no involvement in the application lifecycle process: a person or group who are independent and will try to penetrate the application.  They will uncover security issues through penetration tests, which are presented to the system's owner. Effective penetration tests couple this information with an accurate assessment of the potential impacts to the application and outline a range of technical and procedural countermeasures to reduce risk.

Penetration tests are valuable for several reasons:

  1. Determining the feasibility of a particular set of attack vectors
  2. Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
  3. Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
  4. Assessing the magnitude of potential business and operational impacts of successful attacks
  5. Testing the ability of network defenders to successfully detect and respond to the attacks
  6. Providing evidence to support increased investments in security personnel and technology

In a series of blog posts I will go over everything that I find and document how to overcome such vulnerabilities.

No Caching in any Browser

I need to stop my website being cached in all browsers, for security reasons.  This has been driving me nuts for the past two weeks, as every method I tried still allowed some caching.  The easiest way to check is to press the back button in the browser; the results should update rather than come from cache.

I finally found a solution

using HTML:

<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />

In ASP.NET

Response.AppendHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
Response.AppendHeader("Pragma", "no-cache");
Response.AppendHeader("Expires", "0"); 

Cache-Control is per the HTTP 1.1 spec for clients (and is implicitly required by some browsers alongside Expires), Pragma is per the HTTP 1.0 spec for clients and proxies, and Expires is per the HTTP 1.0/1.1 specs for clients and proxies. Other Cache-Control parameters are irrelevant if the three mentioned above are specified. The Last-Modified header is only needed if you actually want the response to be cached, so there is no need to specify it at all.

Note that when the page is served over HTTP and a header is present both in the HTTP response headers and in the HTML meta tags, the one specified in the response header takes precedence over the HTML meta tag. The HTML meta tags are generally only used when the page is viewed from the local disk file system (see the W3 HTML spec, chapter 5.2.2). Take care when you don't set the headers programmatically, because the web server may include some default values of its own. To verify what is actually being sent, you can inspect the headers using the Firebug Net panel.
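In an MVC application it is also worth stamping the same headers onto every response with a global action filter, so that individual pages cannot forget them. The NoCacheFilter below is a hypothetical sketch that simply repeats the three headers above.

using System.Web;
using System.Web.Mvc;

// A hypothetical global filter that stamps the no-cache headers onto every MVC response.
public class NoCacheFilter : ActionFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        HttpResponseBase response = filterContext.HttpContext.Response;
        response.AppendHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1
        response.AppendHeader("Pragma", "no-cache");                                   // HTTP 1.0
        response.AppendHeader("Expires", "0");                                         // proxies
        base.OnResultExecuting(filterContext);
    }
}

// Registered once, e.g. in Global.asax.cs:
//   GlobalFilters.Filters.Add(new NoCacheFilter());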

Singleton Pattern the right way

The singleton pattern is used in almost all modern-day programming languages, so why do I keep finding it written incorrectly in so many applications? Let's start with the right way:

public sealed class SimpleNoLockLazy
{
    static readonly SimpleNoLockLazy instance = new SimpleNoLockLazy();

    // Explicit static constructor to tell the C# compiler
    // not to mark the type as beforefieldinit
    static SimpleNoLockLazy()
    {
    }

    // Private instance constructor prevents external instantiation
    SimpleNoLockLazy()
    {
    }

    public static SimpleNoLockLazy Instance
    {
        get { return instance; }
    }
}

So why this implementation?  The reason for choosing it is performance in a multi-threaded environment: this pattern is not only simple, it is also the fastest thread-safe variant.  I have attached a benchmark application, based on Jon Skeet's benchmark, but tested using parallel tasks.

Benchmark.zip (9.98 kb)

I found Jon Skeet's article very useful
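A rough sketch of the kind of parallel access the benchmark measures (the iteration count and timing are illustrative only):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class SingletonBenchmarkSketch
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();

        Parallel.For(0, 10000000, i =>
        {
            // Every thread sees the same, already-initialised instance;
            // no locking is required on this path.
            if (SimpleNoLockLazy.Instance == null)
                throw new InvalidOperationException("Instance should never be null");
        });

        stopwatch.Stop();
        Console.WriteLine("Elapsed: {0} ms", stopwatch.ElapsedMilliseconds);
    }
}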

Validation from WCF layer through to MVC

 

First off, why would you want to perform validation in the WCF layer?

After a rigorous pen test, it was noted that validation was happening mainly in the client browser, and the application required validation to occur at the business logic layer, in our case the WCF layer.

In general, when working with an MVC/ASP.NET web application, you would typically want to do validation on the client-side as well as on the server side. Whilst the custom validation is simple enough, you'd have to duplicate it on the client and server, which is annoying - now you have two places to maintain a single validation routine.

Let's look at the different validation options that will assist us in solving the validation issues.

There are five validation approaches to choose from. Each has advantages and disadvantages over the others, and it is possible to apply multiple approaches at the same time. For example, you can implement the self-validation and data annotation attribute approaches together, which gives you a lot of flexibility.

  1. Rule sets in configuration
  2. Validation block attributes
  3. Data annotation attributes
  4. Self-validation
  5. Validators created programmatically

Rule sets in Configuration

In this approach, we put our validation rules into the configuration file (web.config in ASP.NET and app.config in Windows applications). Here is an example showing how to define validation rules:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="validation" type="Microsoft.Practices.EnterpriseLibrary.
	Validation.Configuration.ValidationSettings, 
	Microsoft.Practices.EnterpriseLibrary.Validation, Version=5.0.414.0, 
	Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" />
  </configSections>
  <validation>
    <type name="ELValidation.Entities.BasicCustomer" 
		defaultRuleset="BasicCustomerValidationRules"
      assemblyName="ELValidation, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
      <ruleset name="BasicCustomerValidationRules">
        <properties>
          <property name="CustomerNo">
            <validator type="Microsoft.Practices.EnterpriseLibrary.
		Validation.Validators.NotNullValidator, 
		Microsoft.Practices.EnterpriseLibrary.Validation"
              negated="false" messageTemplate="Customer must have valid no"
              tag="CustomerNo" name="Not Null Validator" />
            <validator type="Microsoft.Practices.EnterpriseLibrary.
		Validation.Validators.StringLengthValidator, 
		Microsoft.Practices.EnterpriseLibrary.Validation"
              upperBound="5" lowerBound="5" lowerBoundType="Inclusive" 
		upperBoundType="Inclusive"
              negated="false" messageTemplate="Customer no must have {3} characters."
              tag="CustomerNo" name="String Length Validator" />
            <validator type="Microsoft.Practices.EnterpriseLibrary.
		Validation.Validators.RegexValidator, 
		Microsoft.Practices.EnterpriseLibrary.Validation"
              pattern="[A-Z]{2}[0-9]{3}" options="None" patternResourceName=""
              patternResourceType="" 
		messageTemplate="Customer no must be 2 capital letters and 3 numbers."
              messageTemplateResourceName="" messageTemplateResourceType=""
              tag="CustomerNo" name="Regex Validator" />
          </property>
        </properties>
      </ruleset>
    </type>
  </validation>
</configuration>

Validation Block Attributes

In this approach, we define our validations through the attributes defined in Enterprise Library validation block.

[NotNullValidator(MessageTemplate = "Customer must have valid no")]
[StringLengthValidator(5, RangeBoundaryType.Inclusive, 
		5, RangeBoundaryType.Inclusive, 
		MessageTemplate = "Customer no must have {3} characters.")]
[RegexValidator("[A-Z]{2}[0-9]{3}", 
	MessageTemplate = "Customer no must be 2 capital letters and 3 numbers.")]
public string CustomerNo { get; set; }

The message template is a good way of providing a meaningful message on failure; the placeholders in curly brackets are replaced by the Enterprise Library Validation Block at runtime.

Data Annotation Attributes

In this approach, we define our validations through the attributes defined within System.ComponentModel.DataAnnotations assembly.

[Required(ErrorMessage = "Customer no can not be empty")]
[StringLength(5, ErrorMessage = "Customer no must be 5 characters.")]
[RegularExpression("[A-Z]{2}[0-9]{3}", 
	ErrorMessage = "Customer no must be 2 capital letters and 3 numbers.")]
public string CustomerNo { get; set; }

This approach is widely used in conjunction with Entity Framework, MVC and ASP.NET validations.

Self-validation

This approach gives us much more flexibility to create and execute complex validation rules.

To implement this approach, we first decorate the object type with the HasSelfValidation attribute, as shown in the following example:

[HasSelfValidation]
public class AttributeCustomer
{
    …
}

 

Then we write our validation logic in a method decorated with the SelfValidation attribute:

[SelfValidation]
public void Validate(ValidationResults validationResults)
{
    var age = DateTime.Now.Year - DateTime.Parse(BirthDate).Year;

    // Due to laws, only customers older than 18 can be registered 
    // to system and allowed to order products
    if (age < 18)
    {
        validationResults.AddResult(
            new ValidationResult("Customer must be older than 18",
                this,
                "BirthDate",
                null,
                null));
    }
}

Validators Created Programmatically

This approach is different from the others because the validation rules are created programmatically and executed independently of the type.

First, we define our validation rules:

Validator[] validators = new Validator[] 
{ 
    new NotNullValidator(false, "Value can not be NULL."),
    new StringLengthValidator(5, RangeBoundaryType.Inclusive, 
	5, RangeBoundaryType.Inclusive,  "Value must be between {3} and {5} chars.")
};

Then we add them to one of the composite validators, depending on our needs.

var validator = new AndCompositeValidator(validators);

In this example, we check that the value under test is not null and is exactly five characters long.

Finally, I want to mention validating collections. It is similar to validating individual objects:

// Initialize our object and set the values
var customer = new AttributeCustomer();
            
FillCustomerInfo(customer);

// Create a list of objects and add the objects to be tested to the list
List<AttributeCustomer> customers = new List<AttributeCustomer>();
customers.Add(customer);

// Initialize our validator by providing the type of objects in the list and validate them
Validator cusValidator = new ObjectCollectionValidator(typeof(AttributeCustomer));
ValidationResults valResults = cusValidator.Validate(customers);

// Show our validation results
ShowResults(valResults);

 

My first thought was to use the MVC answer to validation, DataAnnotations, so why can't we do this?

WCF is a technology for exposing services, and it does so in an interoperable way. A data contract exposed by the service is a contract for data only. It doesn't matter how many fancy attributes you use on the contract or how much custom logic you put inside the get and set methods of a property: on the client side you always see just the properties.

The reason is that once you expose the service, all of its contracts are exposed in an interoperable way: service and operation contracts are described by WSDL, and data contracts are described by XSD. XSD can describe only the structure of data, not logic. Validation can, in a limited way, be described in XSD, but the .NET XSD generator doesn't do this. Once you add a service reference to your WCF service, the proxy generator takes the WSDL and XSD as its source and recreates your classes without any of those attributes.

If you want client-side validation, you should implement that validation on the client side in the first place; it can be done by using buddy classes for the partial classes generated by the WCF proxy. If you don't want to use this approach (and don't mind a maintenance nightmare), you must share the assembly containing your entities between the WCF client and the WCF service, and reuse those types when adding the service reference. This creates tight coupling between your service and the ASP.NET MVC application.
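A minimal sketch of the buddy-class route, assuming the service reference generated a partial proxy class called Customer; the namespace and property names must match the generated code and are illustrative here:

using System.ComponentModel.DataAnnotations;

namespace MyApp.CustomerService   // assumed namespace of the generated proxy
{
    // The generated proxy class is partial, so the metadata "buddy" can be
    // attached on the client without touching the generated code.
    [MetadataType(typeof(CustomerMetadata))]
    public partial class Customer
    {
    }

    public class CustomerMetadata
    {
        [Required(ErrorMessage = "Customer no can not be empty")]
        [StringLength(5, ErrorMessage = "Customer no must be 5 characters.")]
        [RegularExpression("[A-Z]{2}[0-9]{3}",
            ErrorMessage = "Customer no must be 2 capital letters and 3 numbers.")]
        public string CustomerNo { get; set; }
    }
}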

What about Microsoft.Practices.EnterpriseLibrary.Validation.Validators?

This is a possible solution for the WCF layer, but what it does not provide is a way to pass the validation back to the UI for JavaScript validation, which you get with DataAnnotations.

It may be worth using both the DataAnnotations and the Enterprise Library validation block together.

One question still remains: if the validation fails in the services layer (WCF), how does the validation message get passed back to the calling WCF client?
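One option, sketched below and not necessarily the final answer, is to avoid throwing at all and instead return the validation outcome as part of the response contract, which the MVC client can then copy into ModelState. The contract names are hypothetical:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember]
    public string CustomerNo { get; set; }
}

[DataContract]
public class SaveCustomerResponse
{
    [DataMember]
    public bool IsValid { get; set; }

    [DataMember]
    public List<string> ValidationMessages { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    SaveCustomerResponse SaveCustomer(CustomerDto customer);
}

// On the MVC side the messages can then be surfaced to the view:
//   var response = client.SaveCustomer(customer);
//   if (!response.IsValid)
//       foreach (var message in response.ValidationMessages)
//           ModelState.AddModelError("", message);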


So what else can we use?

First of all, do not throw exceptions as a way of validating data; that is far too expensive an operation compared with handling invalid data gracefully.

If you would like to see some sample code, take a look at Microsoft Enterprise Library 5.0 - Introduction to Validation Block by Ercan Anlama.

ELValidation_src.zip (772.17 kb)

 

We've looked at all the validation options; the remaining question is how to pass the validation results over WCF back to the WCF client. To be continued...

About the author

You have probably figured out by now that my name is Bryan Avery (if not, please refer to your browser's address field).  Technology is more than a career to me - it is both a hobby and a passion.  I'm an ASP.NET/C# Developer at heart...
