X Blogs


Azure durable storage

February 25, 2015 09:35 by author Rok Bermež

A public preview of a new Azure service is available that gives our VMs high-performance storage. Azure now offers two kinds of durable storage: Premium Storage and Standard Storage. Premium Storage persists data on SSDs (Solid State Drives) and thereby delivers high throughput, low latency, and consistent, predictable performance.

Premium Storage is ideal for I/O-intensive workloads, which makes it great for databases hosted in your cloud VMs. You can attach several Premium Storage disks to a single VM, scaling the storage available to the VM up to 32 TB at more than 50,000 IOPS with less than 1 millisecond of read latency. Altogether this is a wickedly fast storage option that makes it possible to run even more, and even larger, workloads in the cloud.

This means we can now move even the more demanding business applications to the cloud, including SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, and SAP Business Suite solutions.

Sizes and performance

Premium Storage disks deliver up to 5,000 IOPS and 200 MB/sec of throughput, depending on the size of the disk you choose. When you create a new Premium Storage disk, you pick the disk size and the performance characteristics your application needs.

The Premium Storage preview offers the following disk types:

Disk Type             P10          P20          P30
Disk Size             128 GB       512 GB       1 TB
IOPS per Disk         500          2,300        5,000
Throughput per Disk   100 MB/sec   150 MB/sec   200 MB/sec

You can push performance even further by striping multiple disks together (up to the limit of the network bandwidth available to the VM). You can read more about this approach at https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx.
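As a rough sketch, provisioning a Premium Storage account and attaching a 1 TB (P30) data disk to an existing DS-series VM with the classic Azure PowerShell module could look like this (the service, VM, and account names are made up):

# Create a Premium Storage account (classic/ASM cmdlets)
New-AzureStorageAccount -StorageAccountName "mypremiumstore" -Location "West Europe" -Type "Premium_LRS"

# Attach a new, empty 1 TB data disk (P30 class) to a DS-series VM
Get-AzureVM -ServiceName "mysvc" -Name "myvm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "data1" -LUN 0 `
        -MediaLocation "https://mypremiumstore.blob.core.windows.net/vhds/data1.vhd" |
    Update-AzureVM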

Durability

Azure customers run critical applications that depend on the durability of their data and on a high tolerance for the failures that can occur while accessing it. Premium Storage keeps three copies of the data within the same region. On top of that, you can optionally snapshot your disks and copy the snapshots to a standard GRS storage account, which gives you additional geo-redundancy.

If you want to sign up for the Azure Premium Storage preview, visit the Azure Preview page at http://azure.microsoft.com/en-us/services/preview/.



Get data from Folder in Power Query

February 10, 2015 22:26 by author Gašper

It seems that lately Excel users can be separated into two groups: those who get the new Excel (2010 or 2013) but still use the application as if it were Excel 2003, and those who may have heard of the whole Power BI thing but think that the whole Power BI story can only benefit certain people. That is not true. Power BI is, all in all, a set of brilliant commands. This post will show you only one of them, the Get Data From Folder command in Power Query, but it will also show you how that command can make your life easier. Take the time to read this post and think about how to implement this in your work. Trust me, you will not regret it! Here's what this post will talk about.

If you get new data periodically in any form (txt, csv, xlsx…), then you know that every time you have to go through the same process to get that data into Excel. From now on, that will be reduced to copying the file to a certain folder and clicking Refresh in Excel. It doesn't get much better than that. But first, let's explain what Power Query is.

Power Query is an Excel add-in that was created to help you get data into Excel or Power Pivot. It's part of Microsoft Power BI and is even incorporated into the new Microsoft Power BI Designer. Everything you read about in this post refers to Excel 2010 and Excel 2013, for which Power Query is available. And now for the Get data from folder command.

When I first found this command, my enthusiasm about Excel doubled overnight (and that is saying a lot). Here's where it can really help you. Working with Excel will sooner or later get you in a position where you are repeating one and the same actions periodically. In the beginning that is great, but in time you would much rather be creating great dashboards and charts instead of repeating the same imports or copying over and over again. Before Power Query was "born", the only tool at your disposal for easing that process was VBA. You could write a macro that did the import and accounted for all the specifics in your data. But now things have gotten even better. What follows is a nice example of that.

Let's say that every month we get a new txt file that is just an export of data that we want in Excel so we can analyze it. This is what those txt files look like.

So every month we have to remove the top six rows, import the columns correctly and append them to the previously imported data and analysis. Here's how you do it with the Get data from folder command in Power Query.

On the Power Query tab select From File and then From Folder. This will open a Choose a folder dialog box, where you can choose any folder you like (even folders on SharePoint).

Once you choose the folder, you get what will soon become your favorite window in Excel. It's the Query Editor Window. And here the fun begins.

The above picture also shows, magnified, two buttons we will use. One hides the dropdown menu of the most used commands in Power Query, and we will use many of them during this process. But the other is even more important at this first stage, and that is the double down arrow in the Content column.

The reason we need this button is that at this point we are seeing the list of files in the folder, but not the content of those files. Pressing this button will show exactly that.

Here's what we get after pressing it.

And now the fun begins. We will do some data transforming by mostly using the dropdown menu button we talked about before.

First off, we will delete the top six rows. So from the dropdown menu we choose Remove Top Rows…

As the three dots at the end of the command suggest, you get a dialog box, and all it needs from you is the number of top rows you wish to remove (in our case that is 6).

After that we will choose a simple Use First Row As Headers command.

Now, as you can imagine, this took care of the first six rows of the first txt file. But the same six rows from the other txt files in that folder are still there, appended below. So the next step must be removing those. We will be using the filter in the Name column to do that. First, a little trick: at the beginning the filter only shows a limited number of records, and you must use the Load more command shown below.

Now we remove the blanks (since all data rows contain the name, these must be the six rows at the beginning of each txt file)…

…and then remove the rows containing Name (these are the header rows of the other txt files).

Now that we have the data we need, we just have to tell Excel what that data is, so we set the data types of the columns: select a column, go to Home/Data Type and pick the appropriate data type.

With that done, we attend to the specifics in our data. In the City column, a question mark has replaced the apostrophe, so we should revert those changes. We will use the Transform/Replace Values command.

The dialog box will remind you of a classic Find and Replace Window.

And this is what you get

Now we will use the Close & Load command.
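For reference, the query that the steps above build could look roughly like this in Power Query's M language (the folder path, delimiter and column names are made up for illustration):

let
    Source = Folder.Files("C:\MonthlyExports"),
    Combined = Binary.Combine(Source[Content]),
    Imported = Csv.Document(Combined, [Delimiter = "#(tab)"]),
    RemovedTopRows = Table.Skip(Imported, 6),
    PromotedHeaders = Table.PromoteHeaders(RemovedTopRows),
    RemovedBlanks = Table.SelectRows(PromotedHeaders, each [Name] <> null and [Name] <> ""),
    RemovedHeaderRows = Table.SelectRows(RemovedBlanks, each [Name] <> "Name"),
    ChangedTypes = Table.TransformColumnTypes(RemovedHeaderRows, {{"Name", type text}, {"City", type text}}),
    ReplacedValues = Table.ReplaceValue(ChangedTypes, "?", "'", Replacer.ReplaceText, {"City"})
in
    ReplacedValues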

OK, so now we have done everything we normally do every time we get a txt file. More or less the same amount of work, so the question is what we have gained, or what is so special about this.

First and foremost, at this point we have imported four txt files at once and now have 4000 rows. But say we just got a new txt file with new data.

All we need to do is to copy that file to the same folder as the others.

And just right-click the data in Excel and choose Refresh.

And right away we get 1000 new rows from the new txt.

So Power Query will take all the files in that folder and repeat all the steps that we defined before. Brilliant. But there's more. What if we get a txt file that needs a new rule or a change in the old set of rules? All we need to do is right-click the query on the right and choose Edit, and we are back in the Query Editor, where on the right we can delete or modify any of the steps that we defined in the original query.

 

Pure brilliance. But if a new txt file were totally different from the previous ones, you could define a new query for it and then use the Append command to add the new data to our old query.

Now do you believe me that this is truly a game changer? Just imagine: you only do the hard work once, and then it's just a matter of refreshing your data. And that is why it's called POWER BI and POWER QUERY :)

This is a reposting of the original post from Excel Unplugged which can be found here.

 



The DocumentDB NoSQL database

February 10, 2015 10:40 by author Rok Bermež

The Azure DocumentDB service has joined the pile of Microsoft cloud services that let us store data. As a NoSQL database, DocumentDB is completely schema-free, so it lets us store any JSON document whatsoever and query it with a familiar, though somewhat more document-oriented, SQL syntax, which we can optionally extend with our own user-defined functions (UDFs) written in JavaScript.
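For instance, a query over JSON documents could look like this (the collection and property names are made up, and the second query assumes a UDF named toUpperCase has been registered):

SELECT b.title, b.author.name
FROM books b
WHERE b.year >= 2010

SELECT udf.toUpperCase(b.title)
FROM books b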

DocumentDB is designed to scale linearly to meet the needs of your application. It is purchased in individual capacity units, each of which offers dedicated, reserved, high-performance storage and throughput. Capacity units can easily be added or removed through the Azure portal or the REST management API according to current needs, which makes it possible to scale the database elastically in steps, with predictable performance and without application downtime.

Over the last year, Microsoft has used the DocumentDB service internally for quite a few of its higher-profile services. They currently run several DocumentDB databases larger than a couple of hundred TB, processing many millions of complex queries per day with predictable, low single-digit-millisecond latency.

DocumentDB also lets you optimize performance by tuning the indexing strategies and consistency levels you want for a particular application or scenario, which makes it an extremely flexible and capable data service for your applications. For queries and read operations, DocumentDB offers four distinct consistency levels: Strong, Bounded Staleness, Session, and Eventual. These levels let you make a sensible trade-off between consistency and performance. Each consistency level is backed by a predictable level of performance, guaranteeing reliable results for your application.

DocumentDB builds on ubiquitous formats such as JSON, HTTP and REST, which makes the service easy to consume from any web or mobile application.

Today, .NET, Node.js, JavaScript and Python SDKs are available, and the DocumentDB service can also be accessed through RESTful HTTP interfaces.

You can get to the DocumentDB service through the Azure preview portal (https://portal.azure.com/)

clip_image001

We will take a closer look at actually using it in a future issue.



What's new in ASP.NET 5.0

December 27, 2014 09:37 by author Rok Bermež

The first public preview of ASP.NET 1.0 came out almost 15 years ago. Since then, millions of developers have used it to build and run fantastic web applications, and throughout all those years it has evolved constantly and gained new functionality.

The next version, which we expect sometime in the latter half of this year, will be one of the most significant architectural updates the ASP.NET platform has ever seen. ASP.NET 5.0 will be considerably leaner, more modular, cross-platform, and in general much better optimized for running in the cloud. You can try a preview of it right now by installing the CTP of the next generation of Visual Studio, Visual Studio 2015, which you can get at http://go.microsoft.com/fwlink/?LinkId=521794.

ASP.NET 5.0 will be an open-source web platform for building modern web applications that can run on Windows, Linux and Mac operating systems. It will contain a new version of the MVC technology (MVC 6), which merges the functionality of MVC and Web API into a single programming model. ASP.NET 5.0 will also be the foundation for SignalR, which lets you use real-time functionality in web applications. ASP.NET 5 is built on the .NET Core runtime, but for better compatibility it can also run on the full .NET Framework.

With the new architectural changes that make the web framework considerably leaner, you no longer need to add a reference to System.Web, and practically all functionality is now implemented as NuGet modules, so you can include in your application only those parts of the framework you actually need (see the project.json sketch after the list below).

On top of that, you get quite a few additional improvements:

· you can now develop and run ASP.NET applications on Windows, Mac and Linux

· with .NET Core you get true 'side-by-side' versioning

· new development tools that simplify modern web development

· a single programming model for Web UI and Web API

· configuration better suited to cloud environments

· built-in support for creating and using NuGet packages

· built-in support for dependency injection (DI)

· the ability to host on a web server or self-host in your own process
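As an illustration of the NuGet-based modularity, a project.json from the early ASP.NET 5 previews looked roughly like this (the package names and versions shown are examples from the beta builds of that time):

{
  "dependencies": {
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta1",
    "Microsoft.AspNet.Mvc": "6.0.0-beta1"
  },
  "frameworks": {
    "aspnet50": { },
    "aspnetcore50": { }
  }
}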

The end result is the ASP.NET you know well, now more finely tuned for modern web development than ever before.



Create site collection in target content database

August 23, 2014 20:02 by author Robi

I get a lot of questions from my customers about how to create a site collection in a specific content database. In Central Administration there is no OOTB option to create a site collection and specify which content database it should be created in.

This is the reason I created a script that opens a form where you can select the web application, managed path, content database, language and template for the site collection.

You must enter the data as seen in this screen shot.

Mandatory fields are:

  1. managed path
  2. content database
  3. primary site collection admin
  4. site collection title
  5. site collection URL - note that this is just the URL after the managed path
  6. language
  7. web template - it loads the templates that match the language and compatibility level and are not hidden

***Note

This script supports creating site collections only with wildcard managed paths.
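At its core, such a script comes down to the New-SPSite cmdlet with the -ContentDatabase parameter; a minimal sketch (the URL, database, account and template values are illustrative):

# Create a site collection in a specific content database
New-SPSite -Url "http://portal.domena1.loc/sites/projects" `
    -ContentDatabase "WSS_Content_Projects" `
    -OwnerAlias "DOMENA1\administrator" `
    -Language 1033 -Template "STS#0" -Name "Projects"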

You can download a script here:

 

And I published the script to Office downloads and scripts as well.

 

Hope you like it.

CreateSiteCollectionInContentDb.ps1 (16.02 kb)



A look at the new Azure portal

August 20, 2014 11:13 by author Rok Bermež

We got our first look at the new Azure Preview portal at this year's //Build conference. The portal brings all your Azure resources together into a single management portal and makes it easy to build cloud applications on the Azure platform using the new Azure Resource Manager (which lets you manage multiple Azure resources as a single application).

The initial preview of the portal supported Web Sites, SQL databases, Storage services and Visual Studio Online resources; in recent months, support for Virtual Machines has joined the line-up. It lets us create standalone VMs (and PaaS services) as well as groups of several resources that can be managed as one logical unit.

We can also use the portal to get insight into billing and to monitor our resources, and we can customize all of it for a quick view of the data that interests us most.

If you are an existing Azure user, you can start using it today at http://portal.azure.com.

Below is a picture of the new portal in action. The dashboard shows the state of services across the various regions together with billing data. On the right there is a single VM named "NTKVM", and clicking its tile opens a new "blade" (the term for the side windows that open on the right-hand side) with additional details about it.

clip_image002



The new Azure Cache

August 12, 2014 10:23 by author Rok Bermež

Last week Microsoft Azure (alongside plenty of other things) got a new caching service: the Azure Redis Cache Service, which lets customers securely use a dedicated Redis cache (http://redis.io/) managed by Microsoft. With this offering we get the rich feature set and ecosystem that Redis provides, plus reliable hosting and monitoring from Microsoft.

The Azure Redis Cache Preview is currently offered in two tiers:

  • Basic – a single cache node (ideal for dev/test scenarios and non-critical workloads)
  • Standard – a replicated cache (two nodes, a master and a slave)

For the duration of the preview, the service is available with just 250 MB and 1 GB of cache memory, with two free instances per subscription.

Using it is extremely simple: we sign in to the Azure Preview Portal and click New -> Redis Cache (Preview):

clip_image001

Once all the 'new cache' options are set, click the Create button and wait a few minutes for the new cache instance to be created.

clip_image002

Application developers can then use a wide range of languages and their corresponding clients (http://redis.io/clients) to connect to the Azure Redis Cache.

In our example we will connect using the .NET Redis client called StackExchange.Redis (https://github.com/StackExchange/StackExchange.Redis). In Visual Studio, we simply add the StackExchange.Redis NuGet package with the NuGet package manager. Then we obtain the endpoints and the security key from the portal

clip_image003

and we are ready to go: with the code below we can connect to our cache instance:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("roktest.redis.cache.windows.net,ssl=false,password=...");

Once the connection is established, we can get a reference to the Redis cache database by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Then we simply read and write values using the StringSet and StringGet methods.

cache.StringSet("Key1", "Živjo svet!");
string value = cache.StringGet("Key1");

More information can be found at the links below:

  • Getting Started guide (http://azure.microsoft.com/en-us/documentation/articles/cache-dotnet-how-to-use-azure-redis-cache/)
  • Documentation (http://msdn.microsoft.com/en-us/library/dn690523.aspx)
  • MSDN forum (http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=azurecache)


Sorting months chronologically and not alphabetically in a Pivot Table report based on Power Pivot data

May 13, 2014 18:41 by author Gašper

Here is our problem. When you create a Pivot Table in Excel, you can group the date field by month and the sort will be logical (January, February, …). But when you create a Pivot Table based on a Power Pivot table, the grouping does not work! So you have to get to the month names by a different road. We do this with the FORMAT function in Power Pivot, but the problem is that when you put this field in a Pivot Table, it gets sorted alphabetically. This is logical, since the values are text and have nothing to do with dates as far as the Pivot Table is concerned, but it is a problem, since the months are not sorted chronologically. This article will tell you how to fix that.

Let's start with a simple table in Excel that has only two columns: one holds date values and the other the number of visitors on each date. Now we would like to create a Pivot Table report to see how the number of visitors is spread through the months.

Case 1: Pivot Table report based on an Excel Table

First we create a Pivot Table based on an Excel Table

The Pivot Table should show the number of visitors by month. But since we only have dates, we have to group the dates by month.

And right away we get the desired result.

Case 2: Pivot Table report based on Power Pivot data.

First we add our Table data to Power Pivot the easiest way – by using the Add to Data Model command on the PowerPivot tab.

Now that we have the data in Power Pivot, we can create a Pivot Table report from the Power Pivot window. But when we create a Pivot Table and want to see the analysis by month, we find that we just can't select the Group command. It is grayed out…

So to get to the months we use a different trick: we go back to the Power Pivot window and create a calculated column using the FORMAT function. This is the Power Pivot rendition of the TEXT function from Excel. The syntax is the same; just the names differ.

Excel Version

Power Pivot version.
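In formula form, the two could look like this (assuming the dates sit in column A of the Excel table and in a Date column of the Power Pivot table):

Excel:        =TEXT(A2, "mmmm")
Power Pivot:  =FORMAT([Date], "MMMM")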

Using the FORMAT function we now get the month names and a new field to build a Pivot Table report with. But when we create it, it looks quite disappointing.

So the numbers are OK, but the sorting is alphabetical and not the kind we want. To get the sorting right, we have to go back to the Power Pivot window and create another calculated column, this time using the MONTH function. This way we get a month number alongside each date and month name.

Now, just adding that to the Pivot Table report would get rid of our problem, but let's not forget that we want the month names as they were; only the sorting is wrong. And now we have all we need.

In the Power Pivot window, we select a value in the Month Name column and then select the Sort by Column command on the Home tab and, hey, look at that: we can now say that the Month Name column will be sorted by the Month No. column.

Doing that changes our Pivot Table report instantly.

And we are one step closer to eternal happiness.

 

The most recent version of this post can be found here.



What's new in Windows Azure Mobile Services

April 1, 2014 15:56 by author Rok Bermež

 

Windows Azure Mobile Services recently got support for .NET and ASP.NET Web API. This combination makes building cloud mobile backends that much easier.

Getting started is easy: go to the Azure management portal and create a new Mobile Service, choosing .NET as the language.

clip_image002[4]

Once the service has been created, we land on a handy getting-started page,

clip_image004[4]

where clicking the 'download' button gets us a ready-made project based on the Web API template, with a few additional NuGet packages.

clip_image006[4]

If we open the default ToDoItemController, we can see how it uses the built-in TableController<T> .NET class, which makes it simple to serve data to mobile applications.

clip_image008[4]
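As a sketch, the scaffolded controller looks roughly like this (the type names and the MobileServiceContext data context follow the template of that era, but treat the details as illustrative):

using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Controllers;
using Microsoft.WindowsAzure.Mobile.Service;

// The data type served by the controller (EntityData supplies Id, versioning, etc.)
public class TodoItem : EntityData
{
    public string Text { get; set; }
}

public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        var context = new MobileServiceContext();   // Entity Framework context generated by the template
        DomainManager = new EntityDomainManager<TodoItem>(context, Request, Services);
    }

    // GET tables/TodoItem
    public IQueryable<TodoItem> GetAllTodoItems()
    {
        return Query();
    }

    // POST tables/TodoItem
    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
}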

Local debugging and development are supported as well, and once we are happy with our service, we simply publish it to its home in the cloud using its 'publish' profile.

clip_image009[4]



The People picker and domain trusts

January 31, 2014 06:58 by author Robi

In the last couple of SharePoint projects I was working with environments where the company was associated with a couple of domains. Since SharePoint has to be configured to work in such environments, I decided to introduce the issue through two typical scenarios and explain how a SharePoint site then needs to be set up so that the people picker can resolve users from other domains.

Scenario 1 – "Two way trust"

The characteristic of a "two way trust" is that users from domain 1 have rights to access resources in domain 2 and vice versa. This means that users from domain 1 can read data in the Active Directory of domain 2, and users from domain 2 can read data from the AD in domain 1. This is very important information, since the configuration of SharePoint Server depends on this characteristic.

Figure 1 shows the scenario where a two way trust is set up between domain 1 and domain 2. SharePoint Server is placed in domain 1 and can be used by users from domain 1 and domain 2. The problem, as far as the people picker is concerned, is that by default the people picker searches only the Active Directory domain in which the SharePoint Server is installed. So if we want to add users from other domains, we need to adjust some settings on the SharePoint Server.

The settings that must be set in this case apply to the entire web application. Of course, these settings cannot be set in the user interface, so it is necessary to open the SharePoint 2013 Management Shell and enter a couple of lines of code.

$wa = Get-SPWebApplication http://portal.domena1.loc

# List the current values for the web application
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains

# Domain 1
$newdomain1 = New-Object Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain
$newdomain1.DomainName = 'domena1.loc'    # FQDN
$newdomain1.ShortDomainName = 'domena1'   # NetBIOS name
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($newdomain1)

# Domain 2
$newdomain2 = New-Object Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain
$newdomain2.DomainName = 'domena2.loc'    # FQDN
$newdomain2.ShortDomainName = 'domena2'   # NetBIOS name
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($newdomain2)

$wa.Update()

To configure the People Picker control, we first save the web application object into a new variable. The current values can be listed with the command $wa.PeoplePickerSettings.SearchActiveDirectoryDomains.

In order to add new values so that other domains are searched, we have to create a new object of type Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain and then set its values. The first value is the FQDN of the domain and the second value is the NetBIOS name, the short domain name. The same procedure must be repeated for all the domains that you want to allow in the people picker.

At the end we have to, of course, save the changes, which are stored on the web application object. We have to repeat the procedure on all web applications on which we want to change the behavior of the people picker, including Central Administration, as sketched below.
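The repetition can be wrapped in a loop; a sketch, reusing the $newdomain1 and $newdomain2 objects built above:

# Apply the same people picker settings everywhere, including Central Administration
Get-SPWebApplication -IncludeCentralAdministration | ForEach-Object {
    $_.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($newdomain1)
    $_.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($newdomain2)
    $_.Update()
}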

Scenario 2 – "One way trust"

A "one way trust" is a way of linking domains where one domain trusts another domain. As an example, I used a one way trust between domain 1 and domain 2, where domain 1 trusts domain 2, while domain 2 does not trust domain 1. To make it simple, we could say that users from domain 2 can access data in domain 1, while domain 1 users cannot access data in domain 2. Such a scenario is widely used for intranet and extranet environments, where users from the intranet domain can access the extranet domain, while the reverse is not possible.

In a scenario where a one way trust is used, we can use the same script that we used for bidirectional trusts, with the difference that we need to provide credentials for the access. These parameters are the username and password of the user who will be able to access resources in domain 2.

 

# Create the app password (run on every server in the farm)
Stsadm -o setapppassword -password 'Pa$$w0rd'

$wa = Get-SPWebApplication http://portal.domena1.loc

# List the current values for the web application
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains

# Domain 1
$newdomain1 = New-Object Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain
$newdomain1.DomainName = 'domena1.loc'    # FQDN
$newdomain1.ShortDomainName = 'domena1'   # NetBIOS name
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($newdomain1)

# Domain 2 - the app pool account has no rights here, so set explicit credentials
$newdomain2 = New-Object Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain
$user = "domena2\accountPravice"
$pass = ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force
$credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pass
$newdomain2.LoginName = $credentials.UserName
$newdomain2.SetPassword($credentials.Password)
$newdomain2.DomainName = 'domena2.loc'    # FQDN
$newdomain2.ShortDomainName = 'domena2'   # NetBIOS name
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($newdomain2)

$wa.Update()

 

Because in this scenario the application pool account does not have rights to access resources in domain 2, we need to set the username and password used to access the Active Directory objects in domain 2.

The first step is to create the app password for storing credentials, on all servers in our farm. The command Stsadm -o setapppassword -password 'Pa$$w0rd' creates a registry key from which the app pool account can read the credential key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\Secure

The next step is to create a connection for each domain specifically and also to set the user account used for connecting to the specified domain.

For this purpose, you can use the account that synchronizes data between SharePoint Server and Active Directory, provided that you have SharePoint Server and the User Profile Synchronization Service. Otherwise, you can create a normal service account that will be used exclusively for reading data from the specified domain.

The last step is to set up the application pool account's permissions to read the AppCredentialKey registry key. Access to the AppCredentialKey is limited to certain local server groups, and if the permissions are not set up correctly, the event logs fill up with "Registry access is not allowed" errors and users do not get resolved in the people picker. The users and groups that have access to the registry key by default are:

  • System
  • Network Service
  • WSS_Restricted_WPG_V4
  • Administrators

In order to grant the application pool access to the AppCredentialKey, you can add the service account to the WSS_Restricted_WPG_V4 group (see the example after the list below). The accounts that must be in this group are:

  • Application pool account for the web application
  • The Sandbox service account
  • The service account of the Central Administration – provided that it is not in the Administrators group.
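For example, adding an app pool account to the group could look like this (the account name is made up):

net localgroup WSS_Restricted_WPG_V4 domena1\svc-apppool /add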

SharePoint configuration can be quite challenging in some scenarios, so I hope that this post will help SharePoint admins ease their work and enhance their understanding of SharePoint. This configuration also applies to SharePoint 2010.

 

Rob Vončina

SharePoint Server MVP



About the author

Rok Bermež is a Slovenian Windows Azure MVP. He works as a Software Engineer and Microsoft Certified Trainer at Kompas Xnet. His primary interests include cloud computing and web development. With extensive experience in development and architecture, he has participated in many projects, and since the CTP release of Windows Azure many of those projects have been based on the Windows Azure platform. Rok has been delivering courses, writing code and speaking at conferences and community events on Microsoft technologies for the last couple of years. You can also find him on Twitter (@Rok_B).
