Monday, December 13, 2010

Three-Tier Architecture Vs MVC



Three-Tier Architecture

Three-tier architecture is perfectly good for describing the overall design of a software product, but it doesn’t address what happens inside the UI layer. That’s not very helpful when, as in many projects, the UI component tends to balloon to a vast size, amassing logic like a great rolling snowball.

It shouldn’t happen, but it does, because it’s quicker and easier to attach behaviors directly to an event handler (a la Smart UI) than it is to refactor the domain model. When the UI layer is directly coupled to your GUI platform (Windows Forms, Web Forms), it’s almost impossible to set up any automated tests on it, so all that sneaky new code escapes any kind of rigor. Three-tier’s failure to enforce discipline in the UI layer means, in the worst case, that you can end up with a Smart UI application with a feeble parody of a domain model stuck on its side.

MVC Architecture

In this architecture, requests are routed to a controller class, which processes user input and works with the domain model to handle the request. While the domain model holds domain logic (i.e., business objects and rules), controllers hold application logic, such as navigation through a multistep process or technical details like authentication.

When it’s time to produce a visible UI for the user, the controller prepares the data to be displayed (the presentation model, or ViewData in ASP.NET MVC, which for example might be a list of Product objects matching the requested category), selects a view, and leaves it to complete the job. Since controller classes aren’t coupled to the UI technology (HTML), they are just pure application logic. You can write unit tests for them if you want to. Views are simple templates for converting the view model into a finished piece of HTML. They are allowed to contain basic, presentation-only logic, such as the ability to iterate over a list of objects to produce an HTML table row for each object, or the ability to hide or show a section of the page according to a flag on some object in the view model, but nothing more complicated than that. By keeping them simple, you’ll truly have the benefit of separating application logic concerns from presentation logic concerns.
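To make this concrete, here is a minimal sketch of a controller in that style. The Product type, IProductRepository interface, and ProductsController are invented for the example; only the Controller base class and ActionResult come from ASP.NET MVC.

using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical domain types, shown only to make the sketch self-contained.
public class Product
{
    public string Name { get; set; }
    public string Category { get; set; }
}

public interface IProductRepository
{
    IList<Product> FindByCategory(string categoryName);
}

public class ProductsController : Controller
{
    private readonly IProductRepository repository;

    public ProductsController(IProductRepository repository)
    {
        this.repository = repository;
    }

    // Application logic only: load the products for the requested category
    // (the presentation model) and hand them to a view; the view template
    // is responsible for turning them into HTML.
    public ActionResult Category(string categoryName)
    {
        IList<Product> products = repository.FindByCategory(categoryName);
        return View(products);
    }
}

Because the controller only talks to an abstraction and returns a result object rather than writing HTML, it can be instantiated and exercised in a plain unit test without a web server.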

Separating Out the Domain Model

Given the limitations of Smart UI architecture, there’s a widely accepted improvement that yields huge benefits for an application’s stability and maintainability.

By identifying the real-world entities, operations, and rules that exist in the industry or subject matter you’re targeting (the domain), and by creating a representation of that domain in software (usually an object-oriented representation backed by some kind of persistent storage system, such as a relational database or a document database), you’re creating a domain model.

What are the benefits of doing this?

• (Easy To Maintain) First, it’s a natural place to put business rules and other domain logic, so that no matter what particular UI code performs an operation on the domain (e.g., “open
a new bank account”), the same business processes occur.

• (No Source Code Duplications) Second, it gives you an obvious way to store and retrieve the state of your application’s universe at the current point in time, without duplicating that
persistence code everywhere.

• Third, you can design and structure the domain model’s classes and
inheritance graph according to the same terminology and language used by
experts in your domain, permitting a ubiquitous language shared by your programmers and business experts, improving communication and increasing the chance that you deliver what the customer actually wants (e.g., programmers working on an accounting package may never actually understand what an accrual is unless their code uses the same terminology).

In a .NET application, it makes sense to keep a domain model in a separate assembly (i.e., a C# class library project—or several of them) so that you’re constantly reminded of the distinction between domain model and application UI. You would have a reference from the UI project to the domain model project, but no reference in the opposite direction, because the domain model shouldn’t know or care about the implementation of any UI that relies on it.
For example, if you send a badly formed record to the domain model, it should return a data structure of validation errors, but would not attempt to display those errors on the screen in any way (that’s the UI’s job).
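As a rough illustration of that last point, a domain-model operation might report problems as plain data, something like the sketch below (the BankAccountService, RuleViolation type, and property names are made up for this example):

using System.Collections.Generic;

// Hypothetical error type returned by the domain model.
public class RuleViolation
{
    public string PropertyName { get; set; }
    public string Message { get; set; }
}

public class BankAccountService
{
    // The domain model validates the request and returns any violations
    // as a data structure; it never tries to display them. Showing the
    // errors on screen is the UI layer's job.
    public IList<RuleViolation> OpenAccount(string ownerName, decimal openingBalance)
    {
        var errors = new List<RuleViolation>();

        if (string.IsNullOrEmpty(ownerName))
            errors.Add(new RuleViolation { PropertyName = "OwnerName", Message = "Owner name is required." });

        if (openingBalance < 0)
            errors.Add(new RuleViolation { PropertyName = "OpeningBalance", Message = "Opening balance cannot be negative." });

        if (errors.Count == 0)
        {
            // ... create and persist the new account here ...
        }

        return errors;
    }
}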

Wednesday, December 1, 2010

Can you find System.Web in Add Reference? [.NET 4.0]

Today I tried to use the method "System.Web.HttpUtility.UrlEncode", but IntelliSense only showed three classes under System.Web.

After googling, I found that I needed to add a .NET reference to "System.Web.dll" because I am making a Windows application.

But unfortunately I could not find "System.Web.dll" anywhere; it's not listed in the .NET references. :(

Finally I figured out why! It's because I had targeted the Windows application at .NET Framework 4.0.

But that's not an excuse, I know;

Meanwhile, I investigated further why it was not showing in the Add Reference dialog of a 4.0 project.

Yeah, I got the issue: by default, a project created for Framework 4.0 targets the ".NET Framework 4 Client Profile", which does not include a reference to System.Web.dll.

Open the project properties and you can see it as shown below.

We can change this target framework to ".NET Framework 4" as shown below.

Once you have done this, you can go and add the reference to System.Web.
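With the reference in place, a quick sanity check like this minimal console sketch should compile and run:

using System;
using System.Web; // needs a reference to System.Web.dll (full framework, not the Client Profile)

class UrlEncodeDemo
{
    static void Main()
    {
        // Encode a value so it is safe to embed in a URL query string
        string encoded = HttpUtility.UrlEncode("C# & .NET 4.0?");
        Console.WriteLine(encoded); // e.g. spaces become '+', '&' becomes '%26'
    }
}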

Hope this helps!!!






Tuesday, November 30, 2010

What’s New in ASP.NET MVC 2

Since ASP.NET MVC 1 reached its final release in April 2009, the developer community has been hard at work applying it to every conceivable task (and a few inconceivable ones). Through experience, we’ve established best practices, new design patterns, and new libraries and tools to make ASP.NET MVC development more successful.
Microsoft has watched closely and has responded by embedding many of the community’s ideas into ASP.NET MVC 2. Plus, Microsoft noticed that certain common web development tasks were harder than expected in ASP.NET MVC 1, so it has invented new infrastructure to simplify these tasks.

Altogether, the new features in ASP.NET MVC 2 are grouped around the theme of streamlining
“enterprise-grade” web development. Here’s a rundown of what’s new:

• Areas give you a way to split up a large application into smaller sections (e.g., having a public area, an administrative area, and a reporting area). Each area is a separate package of controllers, views, and routing configuration entries, making them convenient to develop independently and even reuse between projects.

• Model metadata and templated view helpers are extensible mechanisms for describing the meaning of your data model objects (e.g., providing human-readable descriptions of their properties) and then automatically generating sections of UI based on this metadata and your own design conventions.

• Validation is now far more sophisticated. Your model metadata can specify validation rules using declarative attributes (e.g., [Required]) or custom validators, and then the framework will apply these rules against all incoming data. It can also use the same metadata to generate JavaScript for client-side validation (see the sketch just after this list).

• Automatic HTML encoding (supported on .NET 4 only) means you can avoid cross-site scripting (XSS) vulnerabilities without remembering whether or not to HTML-encode each output. It knows whether you’re calling a trusted HTML helper, and will make the right encoding choice automatically.

• Asynchronous controllers are relevant if you must handle very large volumes of concurrent requests that each wait for external input/output operations (e.g., database or web service calls). These build on ASP.NET’s underlying IHttpAsyncHandler API, potentially boosting performance in such scenarios.

• HTTP method overriding is very neat if you’re exposing a REST-style interface to the Web with the full range of HTTP verbs such as PUT and DELETE. Clients that can’t issue these HTTP request types can now specify an override parameter, and then the framework will transparently accept that as the request’s HTTP verb.

• Strongly typed input helpers let you map input controls (e.g., text boxes or custom
templates) directly to your model objects’ properties with full IntelliSense and refactoring support.

• Child requests are a way to inject multiple extra independent sections into a page (e.g., a navigation menu or a “latest posts” list)—something that doesn’t otherwise fit easily into the MVC pattern. This is based on the RenderAction() mechanism previously included in the “MVC Futures” add-on for ASP.NET MVC 1.
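To give a flavour of the metadata and validation bullets above, here is a minimal sketch of an annotated model class; the Customer type and its properties are invented for the example, while the attributes are the standard DataAnnotations ones the framework understands:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

// Hypothetical model class. ASP.NET MVC 2 reads these attributes as model
// metadata (display names) and validation rules (required, length, range).
public class Customer
{
    [Required(ErrorMessage = "Please enter a name")]
    [StringLength(50)]
    [DisplayName("Customer name")]
    public string Name { get; set; }

    [Range(18, 120, ErrorMessage = "Age must be between 18 and 120")]
    public int Age { get; set; }
}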

Like any other version 2 product, there's also a host of smaller improvements, including extra extensibility options and performance optimizations. I will explain all the above areas one by one in my future posts. Time's up now!! Need to go to office.. catch u guys !! :)

ASP.NET Web Forms Vs ASP.NET MVC

You’ve already heard about the weaknesses and limitations in traditional ASP.NET Web Forms from my previous post (http://upulsgamage.blogspot.com/2010/11/whats-wrong-with-aspnet-web-forms.html). That doesn’t mean that Web Forms is dead, though;

Microsoft is keen to remind everyone that the two platforms go forward side by side, equally supported, and both are subject to active, ongoing development. In many ways, your choice between the two is a matter of development philosophy.

• Web Forms takes the view that UIs should be stateful, and to that end adds a sophisticated abstraction layer on top of HTTP and HTML, using ViewState and postbacks to create the effect of statefulness. This makes it suitable for drag-and-drop, Windows Forms–style development, in which you pull UI widgets onto a canvas and fill in code for their event handlers.

• MVC embraces HTTP’s true stateless nature, working with it rather than
fighting against it. It requires you to understand how web applications actually work; but given that understanding, it provides a simple, powerful, and modern approach to writing web applications with tidy code that’s easier to extend and maintain over time, free of bizarre complications and painful limitations.

There are certainly cases where Web Forms is at least as good as, and probably better than, MVC. The obvious example is small, intranet-type applications that are largely about binding grids directly to database tables or stepping users through a wizard. Since you don’t need to worry about the bandwidth issues that come with ViewState, don’t need to be concerned with search engine optimization, and aren’t bothered about unit testing or long-term maintenance, Web Forms’ drag-and-drop development strengths outweigh its weaknesses.

On the other hand, if you’re writing applications for the public Internet, or larger intranet
applications (e.g., more than a few person-months' work), you'll be aiming for fast download speeds and cross-browser compatibility, built with higher-quality, well-architected code suitable for automated testing, in which case MVC will deliver significant advantages for you.

Saturday, November 27, 2010

Who Should Use ASP.NET MVC?

As with any new technology, its mere existence isn’t a good reason for adopting it (despite the natural tendencies of software developers). Let’s consider how the MVC Framework compares with its most obvious alternatives.

Comparisons with ASP.NET Web Forms
You’ve already heard about the weaknesses and limitations in traditional ASP.NET Web Forms from my previous post, and how ASP.NET MVC overcomes many of those problems. That doesn’t mean that Web Forms is dead, though; Microsoft is keen to remind everyone that the two platforms go forward side by side, equally supported, and both are subject to active, ongoing development.

In many ways, your choice between the two is a matter of development philosophy.

• Web Forms takes the view that UIs should be stateful, and to that end adds a sophisticated abstraction layer on top of HTTP and HTML, using ViewState and postbacks to create the effect of statefulness. This makes it suitable for drag-and-drop, Windows Forms–style development, in which you pull UI widgets onto a canvas and fill in code for their event handlers.

• MVC embraces HTTP’s true stateless nature, working with it rather than fighting
against it. It requires you to understand how web applications actually work; but
given that understanding, it provides a simple, powerful, and modern approach to
writing web applications with tidy code that’s easier to extend and maintain over
time, free of bizarre complications and painful limitations.

There are certainly cases where Web Forms is at least as good as, and probably better than, MVC.
The obvious example is small, intranet-type applications that are largely about binding grids directly to database tables or stepping users through a wizard. Since you don't need to worry about the bandwidth issues that come with ViewState, don't need to be concerned with search engine optimization, and aren't bothered about unit testing or long-term maintenance, Web Forms' drag-and-drop development strengths outweigh its weaknesses. On the other hand, if you're writing applications for the public Internet, or larger intranet applications (e.g., more than a few person-months' work), you'll be aiming for fast download speeds and cross-browser compatibility, built with higher-quality, well-architected code suitable for automated
testing, in which case MVC will deliver significant advantages for you.

What’s Wrong with ASP.NET Web Forms?

Traditional ASP.NET Web Forms was a fine idea, and a thrilling prospect at first, but of course reality turned out to be more complicated. Over the years, real-world use of Web Forms uncovered a range of weaknesses:

• ViewState weight:
The actual mechanism of maintaining state across requests (ViewState) often results in giant blocks of data being transferred between client and server. It can reach hundreds of kilobytes in many real-world applications, and it goes back and forth with every request, frustrating site visitors with a long wait each time they click a button or try to move to the next page on a grid.
ASP.NET AJAX suffers from this just as badly, even though bandwidth-heavy page updating is one of the main problems that Ajax is supposed to solve.

• Page life cycle:
The mechanism of connecting client-side events with server-side event handler code, part of the page life cycle, can be extraordinarily complicated and delicate. Few developers have success manipulating the control hierarchy at runtime without getting ViewState errors or finding that some event handlers mysteriously fail to execute.

• False sense of separation of concerns:
ASP.NET's code-behind model provides a means to take application code out of its HTML markup and into a separate code-behind class. This has been widely applauded for separating logic and presentation, but in reality developers are encouraged to mix presentation code (e.g., manipulating the server-side control tree) with their application logic (e.g., manipulating database data) in these same monstrous code-behind classes. Without better separation of concerns, the end result is often fragile and unintelligible.

• Limited control over HTML:
Server controls render themselves as HTML, but not necessarily the HTML you want. Prior to version 4, their HTML output usually failed to comply with web standards or make good use of CSS, and server controls generated unpredictable and complex ID values that are hard to access using JavaScript. These problems are reduced in ASP.NET 4.

• Leaky abstraction:
Web Forms tries to hide away HTML and HTTP wherever possible. While trying to implement custom behaviors, you’ll frequently fall out of the abstraction, forcing you to reverse-engineer the postback event mechanism or perform perverse acts to make it generate the desired HTML. Plus, all this abstraction can act as a frustrating barrier for competent web developers. For
example, rich client-side interactivity is made excessively difficult because all client-side state can be blown away at any moment by a postback.

• Difficulty applying automated tests:
When ASP.NET's designers first set out their platform, they could not have anticipated that automated testing would become the mainstream part of software development that it is today. Not surprisingly, the tightly coupled architecture they designed is totally unsuitable for unit testing. Integration testing can be a challenge too, as I'll explain within the next few days.

Thursday, November 25, 2010

How to download a ".ZIP" file from an FTP server and unzip it into a local folder!

I think you can easily follow the logic of the code below and get a good understanding of how to do it.

Please go through the code sample and the comment lines; if you have any further suggestions or explanations, please send them to me [usgamage@gmail.com].


NOTE: Here I am using a third-party library, SharpZipLib (namespace "ICSharpCode.SharpZipLib.Zip"), to unzip the downloaded files.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

using System;
using System.Collections.Generic;
using System.Text;
using System.Net;
using System.IO;
using System.Xml;
using ICSharpCode.SharpZipLib.Zip;
using System.Collections;

namespace XMLFileDownloader
{
public class XMLFileDownloader
{
//Variable declarations for FTP login credentials to the FTP ServerURI
string ftpUSER = string.Empty;
string ftpPassword = string.Empty;
string ftpServerURI = string.Empty;
string LocDirPath = string.Empty;
string ZIPFileName = string.Empty;
bool deleteZipFile = false;
int Count = 0;

/// <summary>
/// Method to get configuration values from the SSIS config file
/// </summary>
/// <param name="ConfigFile">Full path to the XML configuration file</param>
/// <returns>Number of ".zip" files downloaded</returns>
public int ReadConfigData(string ConfigFile)
{
XmlDocument xmldoc = null;

try
{
xmldoc = new XmlDocument();
xmldoc.Load(ConfigFile);
XmlNodeList nodeList = xmldoc.DocumentElement.ChildNodes;

foreach (XmlElement element in nodeList)
{
if (element.Name == "Configuration")
{
switch (element.Attributes["Path"].InnerText)
{
case "ftpUSER": ftpUSER = element.ChildNodes[0].InnerText.ToString().Trim().Length != 0 ? element.ChildNodes[0].InnerText.ToString() : "";
break;
case "ftpPassword": ftpPassword = element.ChildNodes[0].InnerText.ToString().Trim().Length != 0 ? element.ChildNodes[0].InnerText.ToString() : "";
break;
case "ftpServerURI": ftpServerURI = element.ChildNodes[0].InnerText.ToString().Trim().Length != 0 ? element.ChildNodes[0].InnerText.ToString() : "";
break;
case "LocDirPath": LocDirPath = element.ChildNodes[0].InnerText.ToString().Trim().Length != 0 ? element.ChildNodes[0].InnerText.ToString() : "";
break;
case "ZIPFileName": ZIPFileName = element.ChildNodes[0].InnerText.ToString().Trim().Length != 0 ? element.ChildNodes[0].InnerText.ToString() : "";
break;
case "DeleteZIPFile": deleteZipFile = Convert.ToBoolean(element.ChildNodes[0].InnerText.ToString().Trim());
break;
}
}
}
}

catch (Exception ex)
{
Console.WriteLine("Configuration file invalid: " + ex.Message);
}

return GetFileList();
}

/// <summary>
/// Initiate config values, then download and unzip the ".zip" files
/// </summary>
/// <returns>Number of ".zip" files downloaded</returns>

public int IniConfig(string ftpUser, string ftpPwd, string uri, string locFilelocation, bool deletefile)
{
ftpUSER = ftpUser;
ftpPassword = ftpPwd;
ftpServerURI = uri;
LocDirPath = locFilelocation;
deleteZipFile = deletefile;

int _totDownload = GetFileList();
if (_totDownload > 0)
{
ZipFiles();
}
return _totDownload;
}

/// <summary>
/// Method to download the ".zip" files (containing the .xml and .csv
/// data files) from the FTP server into the local directory
/// </summary>
/// <returns>Number of ".zip" files downloaded</returns>
public int GetFileList()
{
string[] downloadFiles;
StringBuilder result = new StringBuilder();
WebResponse response = null;
StreamReader reader = null;
try
{
FtpWebRequest reqFTP;
reqFTP = (FtpWebRequest)FtpWebRequest.Create(new Uri(ftpServerURI));
reqFTP.UseBinary = true;
reqFTP.Credentials = new NetworkCredential(ftpUSER, ftpPassword);
reqFTP.Method = WebRequestMethods.Ftp.ListDirectory;
reqFTP.Proxy = null;
reqFTP.KeepAlive = false;
reqFTP.UsePassive = true;
response = reqFTP.GetResponse();
reader = new StreamReader(response.GetResponseStream());

string line = reader.ReadLine();
while (line != null)
{
result.Append(line);
result.Append("\n");
line = reader.ReadLine();
}

if (result.Length > 0)
{
result.Remove(result.ToString().LastIndexOf('\n'), 1);

downloadFiles = result.ToString().Split('\n');

foreach (string file in downloadFiles)
{
if (Path.GetExtension(file) == ".zip")
{
if (Download(file))
{
Count = Count + 1;
Console.WriteLine("Tot. Zip file Downloaded: " + Count);
}
}
}
}
}

catch (Exception ex)
{
if (reader != null)
{
reader.Close();
}
if (response != null)
{
response.Close();
}

Console.WriteLine(ex.Message);
}

finally
{
downloadFiles = null;
}

return Count;
}

/// <summary>
/// This method lists all ".zip" files in the local directory and passes
/// each file name to the unzip method
/// </summary>
public void ZipFiles()
{
int totFiles = 0;
try
{
ArrayList _zipFileList = GenerateFileList(LocDirPath); // generate file list

if (_zipFileList.Count > 0)
{
foreach (string singleFile in _zipFileList)
{
UnZipFiles(singleFile);
totFiles = totFiles + 1;
Console.WriteLine("Total Files extracted: " + totFiles);
}
}
}
catch (Exception ex)
{
Console.WriteLine("Error in listing all '.ZIP' files into string array" + ex.Message);
}
}

/// <summary>
/// This method lists all ".zip" files in the given directory
/// </summary>
/// <param name="Dir">Directory to scan</param>
/// <returns>List of ".zip" file paths</returns>
public static ArrayList GenerateFileList(string Dir)
{
ArrayList fils = new ArrayList();
bool Empty = true;

foreach (string file in Directory.GetFiles(Dir)) // add each file in directory
{
if (file.Contains(".zip"))
{
fils.Add(file);
Empty = false;
}
}

if (Empty)
{
if (Directory.GetDirectories(Dir).Length == 0)
// if directory is completely empty, add it
{
fils.Add(Dir + @"/");
}
}
return fils; // return file list
}

/// <summary>
/// This method will extract the given ".zip" file into the local directory
/// </summary>
/// <param name="zipPathAndFile">Full path of the ".zip" file to extract</param>

public void UnZipFiles(string zipPathAndFile)
{
ZipInputStream s = new ZipInputStream(File.OpenRead(zipPathAndFile));
ZipEntry theEntry;
string tmpEntry = String.Empty;
while ((theEntry = s.GetNextEntry()) != null)
{
string fileName = Path.GetFileName(theEntry.Name);
if (fileName != String.Empty)
{
if (theEntry.Name.IndexOf(".ini") < 0)
{
string[] _str = theEntry.Name.Split('/');
string _currentFile = _str[_str.Length - 1].ToString();

string fullPath = LocDirPath + "\\" + _currentFile;
fullPath = fullPath.Replace("\\ ", "\\");
string fullDirPath = Path.GetDirectoryName(fullPath);
if (!Directory.Exists(fullDirPath)) Directory.CreateDirectory(fullDirPath);
FileStream streamWriter = File.Create(fullPath);
int size = 2048;
byte[] data = new byte[2048];
while (true)
{
size = s.Read(data, 0, data.Length);
if (size > 0)
{
streamWriter.Write(data, 0, size);
}
else
{
break;
}
}
streamWriter.Close();
}
}
}
s.Close();
if (deleteZipFile)
File.Delete(zipPathAndFile);
}

/// <summary>
/// Method to download an individual file from the FTP server
/// </summary>
/// <param name="file">Name of the file to download</param>
/// <returns>True if the file was downloaded successfully</returns>
public bool Download(string file)
{
bool isDownloaded = false;
try
{
string uri = ftpServerURI + "/" + file;
Uri serverUri = new Uri(uri);
if (serverUri.Scheme != Uri.UriSchemeFtp)
{
return isDownloaded;
}
FtpWebRequest reqFTP;
reqFTP = (FtpWebRequest)FtpWebRequest.Create(new Uri(ftpServerURI + "/" + file));
reqFTP.Credentials = new NetworkCredential(ftpUSER, ftpPassword);
reqFTP.KeepAlive = false;
reqFTP.Method = WebRequestMethods.Ftp.DownloadFile;
reqFTP.UseBinary = true;
reqFTP.Proxy = null;
reqFTP.UsePassive = true;
FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse();
Stream responseStream = response.GetResponseStream();
FileStream writeStream = new FileStream(LocDirPath + "\\" + file, FileMode.Create);
int Length = 2048;
Byte[] buffer = new Byte[Length];
int bytesRead = responseStream.Read(buffer, 0, Length);
while (bytesRead > 0)
{
writeStream.Write(buffer, 0, bytesRead);
bytesRead = responseStream.Read(buffer, 0, Length);
}
writeStream.Close();
response.Close();

isDownloaded = true;
}

catch (WebException e)
{
Console.WriteLine(e.Message, "Download Error");
}
catch (Exception ex)
{
Console.WriteLine(ex.Message, "Download Error");
}

return isDownloaded;
}
}
}
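For completeness, here is a minimal sketch of how the class above might be driven from a console application; the FTP URI, credentials, and local folder are placeholders you would replace with your own values:

// Hypothetical driver for the XMLFileDownloader class above.
class Program
{
    static void Main()
    {
        var downloader = new XMLFileDownloader.XMLFileDownloader();

        // Downloads every ".zip" file from the FTP folder into the local
        // directory, unzips each one, and (optionally) deletes the archives.
        int downloaded = downloader.IniConfig(
            "ftpUser",                          // FTP user name (placeholder)
            "ftpPassword",                      // FTP password (placeholder)
            "ftp://ftp.example.com/incoming",   // FTP server URI (placeholder)
            @"C:\Temp\Downloads",               // local folder (placeholder)
            true);                              // delete ".zip" files after unzipping

        System.Console.WriteLine("Zip files downloaded: " + downloaded);
    }
}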

Monday, October 25, 2010

Check Record Exists in SQL Server Database [C#.NET] via Checksum field
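Note: the snippets below refer to a few class-level members that are not shown in the post. A minimal sketch of how they might be declared is given here; the class name and the INSERT template are assumptions, while the table and column names come from the snippets themselves.

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Security.Cryptography;

public class PubMedRecordWriter // hypothetical containing class
{
    // Members referenced by the snippets below (assumed declarations)
    private SqlConnection _SqlConnection;   // opened elsewhere using the supplied connection string
    private string _columnn = string.Empty; // comma-separated column list
    private string _values = string.Empty;  // comma-separated, quoted value list
    private string _chkSum = string.Empty;  // SHA-1 checksum of the key values
    private string _sql = "INSERT INTO dbo.CER_PubMed ({0}) VALUES ({1})"; // assumed INSERT template

    // ... the Add, RecordExists, and ComputeCheckSum methods below go here ...
}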

Main Function:

public void Add(Dictionary<string, string> fields, string ConnString)
{
if(_SqlConnection != null && _SqlConnection.State == ConnectionState.Open)
{
foreach (KeyValuePair<string, string> _item in fields)
{
_columnn += (_columnn.Trim().Length != 0 ? "," : "") + _item.Key ;
_values += (_values.Trim().Length !=0 ? ",'" :"'" ) + _item.Value + "'";

if(_columnn == "ID")
{
_chkSum = ComputeCheckSum(_values);
}
}

if (RecordExists(ref _SqlConnection, "SELECT CheckSumValue FROM dbo.CER_PubMed WHERE CheckSumValue = '" + _chkSum + "'"))
{
// record found in DB, lets do record found task
//UPDATE
//String _sqlUpdate = "UPDATE dbo.CER_PubMed SET CheckSumValue = 27 WHERE CheckSumValue =" + _chkSum;


Console.WriteLine("Record exists");
}
else
{
// record not found in DB, lets do record not found task
_columnn += ",CheckSumValue";
_values += ",'" + _chkSum + "'" ;

//INSERT
_sql = string.Format(_sql, _columnn, _values);
SqlCommand _command = new SqlCommand(_sql, _SqlConnection);
_command.ExecuteNonQuery();

Console.WriteLine("Record not found");
}
_SqlConnection.Close();
}
}

*************************************************************************************

Validate Function:

public bool RecordExists( ref System.Data.SqlClient.SqlConnection _SqlConnection, string _SQL)
{
SqlDataReader _SqlDataReader = null;
try
{
SqlCommand _SqlCommand = new SqlCommand(_SQL, _SqlConnection);
_SqlDataReader = _SqlCommand.ExecuteReader();
}

catch (Exception _Exception)
{
// Error occurred while trying to execute reader
// send error message to console (change below line to customize error handling)
Console.WriteLine(_Exception.Message);
return false;
}

if (_SqlDataReader != null && _SqlDataReader.Read())
{
// close sql reader before exit
if (_SqlDataReader != null)
{
_SqlDataReader.Close();
_SqlDataReader.Dispose();
}
// record found
return true;
}
else
{
// close sql reader before exit
if (_SqlDataReader != null)
{
_SqlDataReader.Close();
_SqlDataReader.Dispose();
}

// record not found
return false;
}
}

************************************************************************************

Compute CheckSum:

public string ComputeCheckSum(string chkFieldStr)
{
string _chkSum = string.Empty;
char[] char1a = null;
Byte[] byte1a = null;
byte[] hash1 = null;

if (!string.IsNullOrEmpty(chkFieldStr))
{
char1a = chkFieldStr.ToCharArray();
byte1a = new byte[char1a.Length];

for (int i = 0; i < char1a.Length; i++)
{
byte1a[i] = (Byte)char1a[i];
}

hash1 = ((HashAlgorithm)CryptoConfig.CreateFromName("SHA1")).ComputeHash(byte1a);
return _chkSum = BitConverter.ToString(hash1) ;
}
return _chkSum ;
}

*************************************************************************************

Wednesday, September 22, 2010

SharePoint 2010: Create a custom site definition template (Topic Site Template)

This developer note will teach you the basics of building a “Topic Site – Level” template (Custom Site Template) using Visual Studio 2010

Getting Started:

Start by running Visual Studio 2010 (I'll call it "VS-2010" from now on) and select New Project from the start screen. (Figure: VS Welcome Screen)

Figure: VS Welcome Screen


Create a new "Topic Site - Level" template project (Custom Site Template).

To create a custom site template, we need to create a new "Site Definition" project using Visual Studio 2010.

You can create a custom site template using Visual Basic or Visual C#. For now, select Visual C# on the left, then pick the "Site Definition" project template under the "SharePoint" > "2010" category. Name your new project "TopicSiteLevelDemo" and click OK. (Figure: Create New Site Definition project)

Figure: Create New Site Definition project


The "SharePoint Customization Wizard" window opens; specify the site URL to use for debugging (e.g., http://usg:1909/) and the trust level. Click the "Validate" button to validate the site URL, and then click "Finish". (Figure: SharePoint Customization Wizard)

Figure: SharePoint Customization Wizard

On the right-hand side is the Solution Explorer showing all the files and folders in your application. The big window in the middle is where you edit your code and spend most of your time. Visual Studio used a default template for the Site Definition project you just created, so you have a working application right now without doing anything! This is a simple "TopicSiteLevelDemo" project, and it is a good place to start for our application. (Figure: Default template for the Site Definition File)

Figure: Default template for the Site Definition File

Out of the box, this default template gives you three files: "Onet.xml", "webtemp.xml", and "default.aspx".

Let me briefly explain each file as it relates to the "Topic Site Level - 1" template.

Onet.xml File

In an Onet.xml file, the Feature element is used within a site definition configuration to contain a reference to a Feature instance and default property values. The Configuration element specifies lists and modules to use when creating SharePoint sites. For more information, see the SharePoint SDK documentation on the format and elements used in site definitions.

SharePoint Foundation activates Features specified within the Onet.xml file in the order that they are listed. Consequently, you must specify Features that are depended upon before Features that depend upon them.

The default "onet.xml" file uses the default "V4.master" file as the site definition's master page.

For the "Infor Sales Portal" topic site levels, a custom master page named "infor_V4.master" is used.

This is defined in the "onet.xml" file. (Figure: Code sample for the custom master page definition)

Figure: Code sample for the custom master page definition
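As a rough sketch of what that definition can look like in Onet.xml (the configuration name and master page path are placeholder values for this example):

<!-- Sketch only: a Configuration element in Onet.xml pointing at the custom master page -->
<Configuration ID="0" Name="TopicSiteLevel1"
               MasterUrl="_catalogs/masterpage/infor_V4.master"
               CustomMasterUrl="_catalogs/masterpage/infor_V4.master">
  <!-- Lists, Modules, SiteFeatures, and WebFeatures elements go here -->
</Configuration>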


Configurations Element

Each configuration element in the configurations section specifies the lists, modules, and Features that are created by default when the site definition configuration or Web template is instantiated.

ListTemplates Element

The ListTemplate section specifies the list definitions that are part of the “Topic Site” template. Each ListTemplate element specifies an internal name that identifies the list definition and also specifies the display name for the list definition. Example: “Topic Site Announcements”

Each List element specifies the title of the list definition and the URL for where to create the list. (Figure: ListTemplate element of "TopicSite – Level 1" onet.xml)

Figure: ListTemplate element of “TopicSite – Level 1” onet.xml

SiteFeatures Element

The "SiteFeatures" element contains references to the site collection–scoped Features to include in the site definition.

WebFeatures Element

The "WebFeatures" element contains references to the Web-scoped Features to include in the site definition.

Modules Element

The Modules collection specifies a pool of modules. Any module in the pool can be referenced by a configuration if the module should be included in Web sites that are created from the configuration. Each Module element in turn specifies one or more files to include, often for Web Parts, which are cached in memory on the front-end Web server along with the schema files. You can use the Url attribute of the Module element to provision a folder as part of the site definition. This markup is supported only for backward compatibility. New modules should be incorporated into Features. (Figure: Module element of “TopicSite – Level 1” onet.xml)

Figure: Module element of “TopicSite – Level 1” onet.xml


Getting SharePoint built-in list web part references onto the "InforSalesPortal" home page. (Figure: web part reference elements in the "TopicSite – Level 1" onet.xml)

Figure: web part reference elements in the "TopicSite – Level 1" onet.xml


WebTemp*.xml File

Example: “webtemp_TopicSiteLevel1.xml”

Each server in a deployment of Microsoft SharePoint Foundation has at least the originally installed WebTemp.xml file located in the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\LCID\XML folder, where LCID is the numeric ID of the language/culture, such as 1033 for English. There may also be one or more custom WebTemp*.xml files. The WebTemp*.xml files contain an itemization of the site definition configurations that are available in the UI for users to select when creating a new Web site. The UI varies depending on whether the Silverlight or HTML site creation page is being used.

The Template element specifies the site definitions that are being made available in the WebTemp*.xml file. Each site definition is defined with a Template element. Each site definition has one or more site definition configurations that can be used to instantiate sites. Each Template element specifies a unique ID and a name that corresponds to a site definition subfolder within the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates folder. (Example: “%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates\TopicSiteLevel1”)

A Template element can contain any number of Configuration child elements. Each such child represents a site definition configuration.

The ID attribute of each Configuration element corresponds to the ID of another Configuration element that is in an Onet.xml file. The second Configuration element specifies the lists and modules of the site definition configuration.

Each Configuration element in a WebTemp*.xml file also specifies the title and description (and the path to the image) of the configuration that is displayed in the SharePoint Foundation UI when a user is creating a new site. A configuration can be hidden from the user interface (UI) by setting its Hidden attribute to TRUE.

The DisplayCategory attribute of a Configuration element in a WebTemp*.xml specifies the category of site type that the site appears under in the UI; for example, "Infor" (Figure: “webtemp_TopicSiteLevel1.xml” file of “Topic Site – Level 1” template)

Figure: “webtemp_TopicSiteLevel1.xml” file of “Topic Site – Level 1” template
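For reference, a webtemp_*.xml entry generally follows the shape sketched below; the ID, name, title, image path, and display category are hypothetical values for this example:

<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch only: webtemp_TopicSiteLevel1.xml -->
<Templates xmlns:ows="Microsoft SharePoint">
  <!-- Name matches the site definition folder under ...\14\TEMPLATE\SiteTemplates -->
  <Template Name="TopicSiteLevel1" ID="10001">
    <!-- ID corresponds to a Configuration ID inside the site definition's Onet.xml -->
    <Configuration ID="0"
                   Title="Topic Site - Level 1"
                   Description="Creates a Topic Site (Level 1)."
                   ImageUrl="/_layouts/images/stsprev.png"
                   DisplayCategory="Infor"
                   Hidden="FALSE" />
  </Template>
</Templates>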


default.aspx file

This file is the Topic Site template's home page; it defines the content area and layout of the site definition. (Figure: default.aspx file)

Figure: default.aspx file