
Wednesday 3 April 2013

SharePoint 2013 Term Property Web Part and Category Image

If you’ve played with the new SharePoint 2013 Managed Navigation you probably came across this screen for your TermSet:


It says that if you reference an image in the Category Image property, you can display it by using the Term Property Web Part.

That’s true, but in my case it was not that straightforward.
When you open the properties of the Term Property Web Part you can see this: 
As you can see, no trace of "Category Image" for the Render Property.

Fire up Reflector, and in the RenderWebPart method of the TermProperty class you will find the value to use as the Custom Property:


The value is "_Sys_Nav_CategoryImageUrl"


And here is the result: 

Happy testing !

Wednesday 2 September 2009

Wake Up SharePoint Sites on multiple Front Ends

So, you have multiple Front Ends behind a Load Balancer and you want to wake up each front end to enable the content to be served as fast as possible. 
Here is a small architecture with 3 Front Ends (click to zoom):
  SharePoint Architecture 
Front End A: 
Front End B: 
Front End C: 
Each front end will respond to the site URL because its IIS host headers are configured for it, but if you use that URL it is the load balancer that replies.
So, how can you access the site on Front End A specifically?
You have to send the HTTP request to Front End A’s IP address, using the site’s host name as the Host value in the HTTP request.
Here is the HTTP request you must build:
GET / HTTP/1.1
Accept: application/x-ms-application, */*
Accept-Language: en,fr-BE;q=0.5
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729)
Accept-Encoding: gzip, deflate
Host: intranet.fabrikam.com
Connection: Keep-Alive
Then you have to send it to Front End A’s IP address. How can you send the request to that specific address? There are several ways:
  1. Using System.Net.Sockets.Socket
  2. Using WebClient or WebRequest (method A)
  3. Using WebClient or WebRequest (method B)
First Method (Using System.Net.Sockets.Socket): 
There is a major drawback with this method: you have to handle authentication manually (easy with Basic authentication, pretty hard with NTLM or Kerberos).

//Build the raw HTTP request
string strHttpRequest = String.Concat(
    "GET / HTTP/1.1\r\n",
    "Accept: application/x-ms-application, */*\r\n",
    "Accept-Language: en,fr-BE;q=0.5\r\n",
    "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729)\r\n",
    "Accept-Encoding: gzip, deflate\r\n",
    "Host: intranet.fabrikam.com\r\n",
    "Connection: Keep-Alive\r\n\r\n");
//Create an IPv4 TCP Socket
Socket socketClient = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
//Connect to Front End A's IP address on the HTTP port (80)
socketClient.Connect(IPAddress.Parse(""), 80);
//Send the HTTP request
socketClient.Send(Encoding.ASCII.GetBytes(strHttpRequest));
//Process the HTTP response
StringBuilder sbHttpResponse = new StringBuilder();
byte[] bReceiveBuffer = new byte[4096];
int iReceivedBytes = 0;
while ((iReceivedBytes = socketClient.Receive(bReceiveBuffer)) > 0)
    sbHttpResponse.Append(Encoding.Default.GetString(bReceiveBuffer, 0, iReceivedBytes));
//Output the HTTP response
Console.WriteLine(sbHttpResponse.ToString());

Second Method (Using WebClient or WebRequest (method A)):
This method apparently only works with .NET Framework 4.0; on .NET 3.5 it does not, because you cannot alter the “Host” header value directly.
There is, however, a way (a hack) to reach the internal collection of headers and bypass the validation:

//Create a HttpWebRequest to Front End A's IP Address
HttpWebRequest myHttpWebRequest = (HttpWebRequest)HttpWebRequest.Create("");
//Modify the Host header to reach the site behind the Load Balancer
myHttpWebRequest.Headers[HttpRequestHeader.Host] = "";
//Process the response
HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
Stream streamResponse = myHttpWebResponse.GetResponseStream();
StreamReader streamReader = new StreamReader(streamResponse);
string strHttpResponse = streamReader.ReadToEnd();
//Output the response
Console.WriteLine(strHttpResponse);

Third Method (Using WebClient or WebRequest (method B)): 
Using the Proxy property of WebClient, you can target Front End A’s IP address while requesting the site URL:
//Create a WebClient 
WebClient webClient = new WebClient();
//You can optionally set a username / password as NetworkCredential 
webClient.Credentials = new NetworkCredential("username", "password");
//The Front End A's IP Address to connect to 
webClient.Proxy = new WebProxy("");
//Will set the correct Host Header 
Stream streamResponse = webClient.OpenRead("");
//Process the HttpResponse 
StreamReader streamReader = new StreamReader(streamResponse);
string strHttpResponse = streamReader.ReadToEnd();
//Output the HttpResponse
Console.WriteLine(strHttpResponse);
streamResponse.Close();
webClient.Dispose();
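Outside .NET, the same trick can be sketched with Python's standard library: open the TCP connection to one specific front end's IP while keeping the site's host name in the Host header. This is only an illustrative sketch — `192.0.2.11` is a documentation placeholder, not one of the real front-end addresses:

```python
import http.client

def build_headers(host):
    # The Host header carries the load-balanced site name,
    # while the TCP connection targets one specific front end.
    return {
        "Host": host,
        "Accept": "application/x-ms-application, */*",
        "Accept-Encoding": "gzip, deflate",
        "Connection": "Keep-Alive",
    }

def wake_up(front_end_ip, host):
    """GET / on a single front end, identified by IP, not by host name."""
    conn = http.client.HTTPConnection(front_end_ip, 80, timeout=10)
    try:
        # http.client will not add its own Host header when one is supplied
        conn.request("GET", "/", headers=build_headers(host))
        return conn.getresponse().status
    finally:
        conn.close()

# e.g. wake_up("192.0.2.11", "intranet.fabrikam.com")  -- placeholder IP
```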
That’s all for today !

Sunday 30 August 2009

Silverlight 3 in SharePoint 2007 with Document Preview (Part 0)

Here is the first part of my guide on adding document previews to SharePoint 2007, using Silverlight 3 to display them.

To achieve this, I’ll be using these technologies/components (more will come):

  • Silverlight 3.0
  • SharePoint 2007 (either WSS 3.0 or MOSS)
  • Microsoft SharePoint SilverView (available on CodePlex)
  • Windows Communication Foundation

The big picture (subject to updates along the posts):


So, we’ll take a standard SharePoint architecture and extend it to generate document previews and allow Silverlight to upload files to a document library.

Things we’ll do:

  • Add some WCF in SharePoint
  • Extend the Microsoft SharePoint Silverview sample to get thumbnails of Documents in SharePoint
  • Create Document Preview of (at least) Microsoft Office Documents


What is new in that big picture?

  • Thumbnails Service WCF
    • Allow Document Preview generation
  • Upload Helper WCF
    • Enable the Silverlight application to upload files to a document library (we’ll review why I chose this approach)
  • The extended Microsoft SharePoint Silverview


In the next post, I’ll briefly introduce Microsoft SharePoint Silverview and extend it to handle more than Image preview.

Sunday 24 February 2008

CISCO IOS (Part 5)

When a process allocates memory (“malloc”), the Pool Manager takes a free memory block and attaches it to the process. The Pool Manager maintains a table of contiguous memory blocks. When a process frees a memory block, the Pool Manager tries to coalesce it with its neighbors. Despite this coalescing, fragmentation is unavoidable. An extremely fragmented memory can lead to “malloc” errors (“%SYS-2-MALLOCFAIL”): the total memory available may be sufficient, yet there are not enough contiguous free blocks to satisfy a requested “malloc”.

When only small non-contiguous freed blocks remain, a process that wants to allocate a larger memory area cannot do it and gets a “MALLOCFAIL” error. The Chunk Manager tries to avoid this kind of situation by allocating memory to processes more cleverly.

The Chunk Manager is responsible for chunk allocation. A chunk contains a finite number of memory blocks of equal size. If every memory block of a chunk is in use, the Chunk Manager allocates a new memory area (a sibling). When no more blocks of a sibling are used, it is freed (“trimmed”). A process thus has a larger allocated memory area split into smaller memory blocks, and when it frees a memory block, it does so inside its own chunk. There is no more fragmentation caused by different processes freeing memory.
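To make this concrete, here is a toy model of a chunk — not IOS code, just a Python sketch of the mechanism: equal-size elements, a free list, a sibling created when the chunk is full and trimmed when it empties again:

```python
class Chunk:
    """Toy model of an IOS chunk: a fixed number of equal-size elements."""

    def __init__(self, cfgsize, max_elements):
        self.cfgsize = cfgsize                  # element size in bytes
        self.max_elements = max_elements        # "Maximum element" column
        self.free = list(range(max_elements))   # indices of free elements
        self.sibling = None                     # extra chunk when this one is full

    def inuse(self):
        n = self.max_elements - len(self.free)
        return n + (self.sibling.inuse() if self.sibling else 0)

    def alloc(self):
        """Return (chunk, index); grow a sibling when every element is used."""
        if self.free:
            return self, self.free.pop()
        if self.sibling is None:
            self.sibling = Chunk(self.cfgsize, self.max_elements)
        return self.sibling.alloc()

    def free_element(self, chunk, index):
        chunk.free.append(index)
        # a fully free sibling is trimmed, like "siblings trimmed" in show chunk
        if self.sibling and len(self.sibling.free) == self.sibling.max_elements:
            self.sibling = None
```

Because every element of a chunk has the same size, any freed slot can satisfy any later allocation from that chunk, so the fragmentation described above disappears.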
#show chunk
Chunk Manager:
  407660 chunks created, 406281 chunks destroyed
  9349 siblings created, 406279 siblings trimmed
         Chunk element  Block   Maximum  Element  Element   Total
Cfgsize    Ohead         size   element    inuse    freed   Ohead    Name
16               4       940       33        2      31     360   String-DB owne 0x654C2408
16               4       940       33        0      33     360   String-DB cont 0x654C27B4
312              16    65588      197       38     159    4072   Extended ACL e 0x654C3484
96               16    20052      171        9     162    3584   ACL Header 0x654D34B8
8536             0     65588        7        1       6    5784   Parseinfo Bloc 0x654DA14C
16               0       456       15        1      14     164   tokenQ node 0x654EA180
20               0       456       13       13       0     144   Chain Cache No 0x654EA348
20               0       456       13        6       7     144   (sibling) 0x66BD0E50
20               0       460       13       13       0     148   (sibling) 0x66F33EE0
Within a chunk, the memory blocks may have a header (or not, when the configured header size is 0). The header size per chunk is fixed (0, 4, 16, 20 or 24 bytes). Example: “ACL Header” is a chunk of at most 171 elements of 96 bytes each; each block carries a 16-byte header, and the whole chunk weighs 20,052 bytes. Currently only 9 memory blocks are in use within this chunk. Next post: the buffers.

Saturday 23 February 2008

CISCO IOS (Part 4)

Every OS includes a memory manager. Modern operating systems use a protected memory architecture: process x cannot read/write the memory of process y. Enabling communication between processes x and y then requires dedicated techniques (Shared Memory, Message Queues, Pipes, Network Connections…). These techniques enhance inter-process security but reduce performance.

The IOS does not implement protected memory: any process can access all the memory without restriction. A process is free to communicate with another one by writing directly into its memory (Buffer Overflow = Crash; I’ll come back to this later when explaining the Memory Block Architecture). The IOS can, however, mark memory as R/O or R/W.

The IOS has memory pools, driven by the Pool Manager:
#show memory
Head       Total(b)     Used(b)    Free(b)    Lowest(b)   Largest(b)
653B8C20   155481056    86243592   69237464   68168948    67670028
EE800000    25165824     5269012   19896812   19819968    19871932
Above, a processor and an I/O memory pool:
  • Head : Start Address of the memory pool
  • Total : Pool size in bytes
  • Used : Total amount of bytes currently used in the pool
  • Free : Total amount of bytes currently free in the pool
  • Lowest : The lowest amount of free bytes since the last restart
  • Largest : The Largest free contiguous memory block
These memory pools belong to Memory Regions driven by the Region Manager:
#show region
Region Manager:
Start       End            Size(b)  Class  Media  Name
0x0E800000  0x0FFFFFFF    25165824  Iomem  R/W    iomem:(iomem)
0x60000000  0x6E7FFFFF   243269632  Local  R/W    main
0x6000F000  0x632FFFFF    53415936  IText  R/O    main:text
0x63300000  0x64DFFCFF    28310784  IData  R/W    main:data
0x64DFFD00  0x653B8C1F     6000416  IBss   R/W    main:bss
0x653B8C20  0x6E7FFFFF   155481056  Local  R/W    main:heap
0x80000000  0x8E7FFFFF   243269632  Local  R/W    main:(main_k0)
0xA0000000  0xAE7FFFFF   243269632  Local  R/W    main:(main_k1)
0xEE800000  0xEFFFFFFF    25165824  Iomem  R/W    iomem
The Processor Memory Pool belongs to the main:heap region. This region belongs to the main region, starting at 0x60000000 and ending at 0x6E7FFFFF. The I/O Memory Pool belongs to the iomem region, starting at 0xEE800000 and ending at 0xEFFFFFFF. Within the main region:
  • main:text : contains IOS’s code in R/O (IText)
  • main:data : contains Initialized variables R/W (IData)
  • main:bss : contains Uninitialized variables R/W (IBss)
  • main:heap : contains standard local memory structures R/W
  • iomem : contains devices memory (I/O bus memory)
We can notice that some of the regions are redundant: main:(main_k0) and main:(main_k1) are both equal to the main region. We can also find iomem at two different address ranges, 0x0E800000->0x0FFFFFFF and 0xEE800000->0xEFFFFFFF, but it is still the same memory area. From one CISCO device to another, the type of memory used for each region may change: on one router the iomem is SRAM, on another the same region is DRAM. The Pool Manager defines its memory pools within the regions, regardless of the memory type used (hardware abstraction).

Let’s go back to the Pool Manager. The “show memory processor” command shows that the memory is divided into memory blocks:
#show memory processor
Processor memory
Address      Bytes     Prev     Next Ref     PrevF    NextF  Alloc PC  what
65A817E0 0000000084 65A8175C 65A81864 001  -------- -------- 628215E8  Init
65A81864 0000001372 65A817E0 65A81DF0 001  -------- -------- 608E3218  Skinny Socket Server
65A81DF0 0000001156 65A81864 65A822A4 001  -------- -------- 608E3218  Skinny Socket Server
  • Address : Start of block
  • Bytes : Size of block
  • Prev : Previous block Address (linkage)
  • Next : Next Block Address (linkage)
  • Ref : How many processes are using this block
  • PrevF : Previous free block
  • NextF : Next free block
  • Alloc PC : Program counter of the code that allocated the block
  • What : Name of the process that owns the block
Inside a Memory Block :
#show memory 0x65A81864
65A81860:          AB1234CD 010B0000 66B4EBEC      +.4M....f4kl
65A81870: 6395634C 608E3218 65A81DF0 65A817F4  c.cL`.2.e(.pe(.t
65A81880: 800002AE 00000001 605C3DD4 00000112  ........`=T.... ...
65A81DE0:                            FD0110DF              }.._
After some research, we can guess some of the fields of the block’s header:
65A81860:            [MAGIC  ]  [PID   ] [?       ]
65A81870: [PTR_NAME] [ALLOCPC]  [NEXT  ] [PREV+20d]
65A81880: [SIZE*] [REF ] [?  ] [ DATA ->] ...
65A81DE0:                     [<- DATA] [MAGIC   ]
  • MAGIC_START is always 0xAB1234CD
  • PID : Process ID
  • PTR_PS_NAME : Pointer to the memory block’s owner process name
  • ALLOC_PC : Same value as in “show memory processor” command
  • NEXT : Next Block Pointer
  • PREV : Previous Block Pointer + 20d (pointing to the Next Block Pointer of the previous block)
  • SIZE : The MSB (Most Significant Bit) of this field is a flag: 1 means the block is in use. The value read here differs from the one displayed by the “show memory processor” command; to make them match I have to compute (value read)*2+4 — I’m not sure why.
  • REF : How many processes are using this block
  • DATA : …
  • MAGIC_END is always 0xFD0110DF (palindrome in hexadecimal)
Starting from this memory block, if I want to go to the next block without using the Next pointer, I must compute:
Block Address + sizeof(magic_start) + sizeof(header) + (2*[value read in Size field]+4) + sizeof(magic_end)
So: 0x65A81864 + 0x4 + 0x24 + 0x560 + 0x4 = 0x65A81DF0
Checking the PID field:
#show process 0x0000010B [010B <> 0000]
Process ID 267 [Skinny Socket Server],
Memory usage [in bytes] Holding: 89644, Maximum: 109468, Allocated: 6601380, Freed: 6514500
Getbufs: 0,
Retbufs: 0,
Stack: 8908/12000
CPU usage PC: 60846DE4, Invoked: 26, Giveups: 1, uSec: 9384 5Sec: 0.00%, 1Min: 0.00%, 5Min: 0.00%, Average: 0.00% Age: 5007208 msec,
Runtime: 244 msec
State: Waiting for Event, Priority: Normal
Checking the PTR PS_NAME field:
#show memory 0x6395634C
63956340:                            536B696E              Skin
63956350: 6E792053 6F636B65 74205365 72766572  ny Socket Server
63956360: 00000000 0A496E76 616C6964 20736B69  .....Invalid ski
->Skinny Socket Server
Checking that there is currently an end of block (must have the FD0110DF magic number):
0x65A81864 + 2*[SIZE]+4 + sizeof(MAGIC_START) + [HEADER SIZE] = 0x65A81864 + 0x560 + 0x4 + 0x24
#show memory 0x65A81DEC
65A81DE0:                            FD0110DF              }.._
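The whole walk-through can be checked mechanically. Here is a small Python sketch of the guessed header arithmetic — the constants are the ones reverse-engineered above, so treat them as guesses, not a documented format:

```python
MAGIC_START = 0xAB1234CD   # guessed start-of-block magic
MAGIC_END   = 0xFD0110DF   # guessed end-of-block magic
HEADER_SIZE = 0x24         # guessed header size from the walk-through

def data_size(size_field):
    """Strip the in-use flag (MSB) and apply the (value*2)+4 rule."""
    return (size_field & 0x7FFFFFFF) * 2 + 4

def next_block(block_addr, size_field):
    """next = addr + sizeof(magic_start) + header + data + sizeof(magic_end)"""
    return block_addr + 4 + HEADER_SIZE + data_size(size_field) + 4

# the "Skinny Socket Server" block at 0x65A81864 has the size field 0x800002AE
```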
Now let’s talk about the crash resulting from a buffer overflow. If we overflow a buffer in a memory area — writing more bytes than were allocated for it — we overwrite the next block’s header, thus breaking the IOS memory linkage mechanism. What a pity! Or not! The IOS constantly checks its memory structures, and if it finds any inconsistency it forces a crash. So for a (crash-free) buffer overflow to succeed, you need to rewrite a coherent header on the next block. And do not forget to add a “jmp” to your code to jump just past that next block header! In the next post: the Chunk Manager!

Friday 15 February 2008

CISCO IOS (Part 3)

Like I previously said, there are 4 process priorities, each with a dedicated queue. A queue contains n “Ready State” processes of priority x.
#sh processes
PID     QTy PC          Runtime (ms) Invoked uSecs      Stacks          TTY     Process
22      Mwe 628250A4    0               1       0       2516/3000       0       RMI RM Notify Wa
23      Hwe 60092C38    0               2       0       5508/6000       0       SMART
24      Msp 612FAFB0    76              170778  0       5540/6000       0       GraphIt
25      Mwe 613B193C    0               2       0       11504/12000     0       Dialer event
26      Mwe 624E81B0    0               1       0       5540/6000       0       SERIAL A'detect
27      Mwe 62AC6984    0               2       0       11516/12000     0       XML Proxy Client
28      M*      0       456             153298  0       9356/12000      194     SSH Process
29      Mwe 602AFC70    0               1       0       2484/3000       0       Inode Table Dest
30      Cwe 62818F90    0               1       0       5548/6000       0       Critical Bkgnd
31      Mwe 6033D7D8    96              102962  0       10208/12000     0       Net Background
32      Mwe 6033DA68    0               2       0       11420/12000     0       IDB Work
33      Lwe 6128EEEC    2184            9209    237     10088/12000     0       Logger
In this “show processes” output, we can see the 4 priorities and a Running State process. Within Q Column:
  • L : Low Priority (background process)
  • M : Medium Priority (default priority)
  • H : High Priority (Process processing received packets)
  • C : Critical Priority (Resources allocation, core system processes)
  • K : Killed
  • D : Crashed
  • X : Corrupted
Within Ty Column:
  • * : Running State Process
  • S : Process that called Suspend
  • E : Process waiting for events
  • rd : Ready State Process
  • we : Idle State Process waiting on events
  • sa : Idle State Process waiting on specific time
  • si : Idle State Process waiting on end of interval
  • sp : Idle State Process waiting on periodic interval
  • st : Idle State Process waiting on timer expiration
  • hg : Hang Process
  • xx : Dead Process
Other columns:
  • PID : Process ID
  • PC : Current program counter of the process (0 = currently running)
  • Runtime : Total amount of CPU time used since process creation
  • Invoked : Number of times the process was invoked
  • uSecs : Average CPU time spent during each invocation
  • Stacks : Free stack space / total stack size
  • TTY : Terminal the process is attached to; if not 0, it points to the specified TTY/VTY (#show line).
#show line
Tty     Line    Typ Tx/Rx       A Modem Roty AccO AccI  Uses    Noise Overruns    Int
0       0       CTY             -     -          -       -      -       0       0       0/0     -
1       1       AUX 9600/9600   -     -          -       -      -       0       0       0/0     -
*194    194     VTY             -     -          -       -      -       3       0       0/0     -
195     195     VTY             -     -          -       -      -       0       0       0/0     -
196     196     VTY             -     -          -       -      -       0       0       0/0     -
  • Process : Process name
Let’s go back to the Scheduler:
  1. The Scheduler starts with the Critical priority queue and runs, then removes, every process in it.
  2. If the Critical queue is empty, the Scheduler moves on to the High priority queue and runs, then removes, its first process. After each High priority process, it checks whether the Critical queue is still empty; if it is not, it runs and removes every process in it before moving on to the next High priority process.
  3. If both the Critical and the High priority queues are empty, the Scheduler moves on to the Medium priority queue. Between each Medium priority process, it checks whether any High priority process is waiting; if so, it runs and removes the first one, then drains the Critical queue, then moves on to the next High priority process, and so on.
  4. If the Critical, High and Medium priority queues are all empty, the Scheduler moves on to the Low priority queue. (Coming from the Medium priority queue, we do not go straight to Low: at the end of the Medium queue we go back to the Critical queue, and only after running this cycle 15 times do we finally reach the Low priority queue.) Between each Low priority process, the scheduler checks for waiting Medium priority processes; while running those, it checks for High priority processes between each of them, and while running those, it drains the Critical queue between each of them.
  5. It goes back to step 1.
If a process forgets to release the CPU within 4 seconds, no worry: the WatchDog Timer will kill it!
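The steps above amount to one rule: before running a process of a given priority, drain every higher-priority queue. Here is a toy Python simulation of that rule (it deliberately omits the 15-pass rule for reaching the Low queue):

```python
from collections import deque

CRITICAL, HIGH, MEDIUM, LOW = range(4)

def run_scheduler(queues, ran):
    """queues: one deque of ready process names per priority level.
    ran: list collecting the order in which processes were run."""

    def service(level):
        while True:
            # before each process of this level, drain every higher level
            for higher in range(level):
                if queues[higher]:
                    service(higher)
                    break
            else:
                if not queues[level]:
                    return  # nothing left at this level or above
                ran.append(queues[level].popleft())  # run to completion

    service(LOW)
```

Each process "runs to completion" (cooperative multitasking); the scheduler only regains control between processes, which is exactly where it re-checks the higher-priority queues.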
The WatchDog Timer is refreshed by an interrupt (every 4 msec, by the System Timer). Incoming packet notification is also done with an interrupt, so the IOS can process an incoming packet instantly. I will talk about the packet switching methods soon. Next post: how the IOS uses memory.

CISCO IOS (Part 2)

Like any OS, the IOS must implement a set of elemental principles:
  • Process Management
  • Memory Management
  • Devices Management
  • User Interface
Let’s talk about processes. The IOS is a cooperative multitasking operating system (not preemptive): each process is responsible for giving the CPU back to the next process. The IOS does not implement multithreading (1 process = 1 thread).

The IOS uses the interrupt mechanism to achieve fast packet switching between interfaces. Each time a packet arrives on an interface, an interrupt is raised, so the CPU can process the packet with priority.

The fact that a process is responsible for its own CPU time can lead to a lot of mayhem if, for example, a process enters an endless loop. The “WatchDog Timer” is a special IOS mechanism that can kill a process taking up too much CPU time (default is 2*2 sec). A process has 6 states:
  • Create State
    • The process is created by the Kernel or by the Parser (cli / config)
    • Resources allocation
  • Modify State (optional)
    • A terminal can be attached to the process (by default there is no stdin/stdout/stderr in Create State)
    • Some parameters can be added
  • Ready State
    • The process is ready to run
  • Running State
    • The process is on the CPU
    • It can:
      • Let another process take the CPU (Suspend) and go back to Ready State.
      • Run to completion, then self-terminate
      • Wait for an event (Idle State)
  • Idle State
    • The process is waiting for a specific event (Wait for External Event)
  • Dead State
    • The process is self-terminated
    • The process was killed by the Kernel
    • The process has completed its job (Run to completion)
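The six states above can be summarized as a transition table — a sketch of the lifecycle as I understand it, not IOS code:

```python
# allowed transitions between the six process states
TRANSITIONS = {
    "Create":  {"Modify", "Ready"},        # Modify is optional
    "Modify":  {"Ready"},
    "Ready":   {"Running"},
    "Running": {"Ready", "Idle", "Dead"},  # suspend / wait for event / terminate
    "Idle":    {"Ready"},                  # the awaited event occurred
    "Dead":    set(),                      # resources freed later by the kernel
}

def can_move(src, dst):
    return dst in TRANSITIONS.get(src, set())

def walk(path):
    """Check that a whole lifecycle path is legal."""
    return all(can_move(a, b) for a, b in zip(path, path[1:]))
```

Note that an Idle process never goes straight back to Running: the event moves it to Ready, and only the scheduler puts it on the CPU.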
Process Lifecycle:
There are 4 process priorities (one FIFO queue per priority):
  • Critical
  • High
  • Medium
  • Low
The scheduler takes a process in Ready State from a priority queue and runs it on the CPU (Running State). Note: I could only find 4 priorities in the sources I consulted. However, the “show list” command shows something else:
#show list | inc Sched
8 650F2910 0/- Sched Preemptive
9 650F2310 0/- Sched Critical
10 650F21F0 0/- Sched High
11 650F1C10 2/- Sched Normal
12 650F1C60 0/- Sched Low
13 650F1E10 0/- Sched Preemptive ION
14 650F2180 262/- Sched Idle
15 650F0800 0/- Sched Dead
16 6535B660 0/- Sched Normal (Old)
17 6535B6C0 0/- Sched Low (Old)
We can see that there is a queue called “Sched Preemptive”, which contradicts the fact that the IOS is not preemptive! (Maybe my sources are pretty old…) This command also displays the number of processes in each queue: there are 262 processes in Idle State and 2 “Normal” priority processes in Ready State. A process moved to Dead State doesn’t automatically get its resources freed, hence the need for a “Dead” queue. In my next post, I’ll explain how the IOS Scheduler works.

Sunday 10 February 2008

CISCO IOS (Part 1)

I've been working with routers, switches and other CISCO peripherals for a long time. The operating system of these devices is IOS (Internetwork Operating System). I've always used IOS without knowing its inner workings. However, I recently read an article about CISCO opening up IOS soon, with an API for third-party developers. I thought: hey! It would be interesting to know how things work inside this OS. Embedded Event Manager is already an API that enables developers to react to specific IOS events:
Figure from Cisco IOS Network Management Configuration Guide, Release 12.4T (source)
As we can see in the figure above, we can react to an event (CLI, SYSLOG, OIR, TIMER, …), or we can enhance the CLI by registering our script/applet as a command:

Router(config)#alias exec my_cli_command event manager run my_eem_applet
Router#my_cli_command

Typing my_cli_command will now execute my_eem_applet.

But with the opening of IOS, we will be able to interact with every part of the IOS (not only events), and perhaps act as a process. We may even be able to register as an interrupt handler… In the next post, I’ll detail the inner workings of the IOS.