Category: SharePoint


I've seen lots of people confused about how to get their Speedway\Summit\Jegs\generic heavy-duty signal switch to work with LED lights, and no one seemed to know which flasher to use. Hopefully someone finds this helpful.

The signal switch is a great way to wire up turn signals and have a nice-looking switch in your car. It’s easy enough to wire up with a little extra guidance on which flasher to use and how to connect it. If you follow the included instructions it won’t work, you’ll go bonkers trying to figure it out, and now you’re here.

First, which flasher: I used the Novato EP35 electronic flasher. It’s heavy duty enough for 6 lights, LED compatible, and needs no load resistors.

Now for the wiring part. It’s pretty simple. There are 3 pins on the flasher, 31, 49 and 49a.

31 – Chassis ground
49 – Fused from battery or fuse box
49a – Goes to the black wire on the signal switch

The blue wire on the signal switch is not used. Wire everything else as needed.

The black wire is the “power wire” for the switch to work correctly. There is no constant power wire. Technically speaking the flasher does all the work. The switch just sends the power to the correct bulbs.

Tip: If you are only wiring up turn signal bulbs or you have a three wire turn signal, do not wire in the red wire. The red wire is only to be used if you have a two wire brake light.

Using SharePoint Online (SPO) is great for the organization; it simplifies storage and farm management. However, as SharePoint usage grows, so do the service requests, and with self-service you are no longer the site collection admin. In line with my previous post about assigning an admin account as the site collection admin, you can also assign a group as site collection admin. Assigning a group must be done after the site is created, so you can just drop this line into the previous post’s script in place of the admin user. And it’s quite easy.

But, yes there’s a but because this is Microsoft after all: to assign the group via SPO you need its SPO claim identifier (the tenant GUID string). To get it, go to a site collection, open Site Settings, then Site Permissions, click Check Permissions, and enter the group name. The output shows the group name along with the SPO tenant GUID (see below). Using that entire string you can assign the group as site collection admin. Using a group also simplifies who has access to all site collections; you don’t have to share admin passwords and can add\remove people quickly.

Permission levels given to Group-SharePoint_Admin (c:0t.c|tenant|abf06f54-4a92-4we7-df41-fdf7f0d3ff92)

 

#Set the ADGroup variable to the claim string copied from Check Permissions
$ADGroupGUID = "c:0t.c|tenant|abf06f54-4a92-4we7-df41-fdf7f0d3ff92"

#Assign the group as site collection admin ($SPSite comes from the site-creation script in the previous post)
Set-SPOUser -Site $SPSite -LoginName $ADGroupGUID -IsSiteCollectionAdmin $true
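If you want to confirm the assignment took, a quick check against the same site works too. This is just a sketch; it filters the SPO users on the site down to the ones flagged as site collection admins:

#List everyone flagged as a site collection admin on the site
Get-SPOUser -Site $SPSite -Limit All | Where-Object { $_.IsSiteAdmin } | Select-Object DisplayName, LoginName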

 

 

Ahhhh, SharePoint Online. None of it makes sense from an admin perspective, but thankfully there’s PowerShell to manage it. One thing you’ll quickly find is that sites created through Office 365 Groups or Teams do not show up in the admin panel’s list of all sites; they are these weird ghosted sites. Typically, though, we want a service\admin account to be a site collection admin on them so we can still manage them for whatever your business reason. I’ve written this little cheater script to auto-add a service account to all SPO sites. I run it every 12 hours, so even if someone removes the account it gets added back, and it also picks up any new self-service sites, which, if you are using O365 Groups or Teams, could be a few a day depending on the size of your org.

I run this script out of Azure serverless. You can also run it from a Windows server, but you must have the SharePoint Online Management Shell (PowerShell module) installed.

The only variables you should need to enter are the bracketed values: your tenant name, the service account, and its password.

SPO documentation: https://docs.microsoft.com/en-us/powershell/sharepoint/sharepoint-online/introduction-sharepoint-online-management-shell?view=sharepoint-ps

Download for SPO management shell: https://www.microsoft.com/en-us/download/details.aspx?id=35588
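If your server has access to the PowerShell Gallery, you can also grab the module from there instead of the MSI installer. A hedged alternative, using the module name Microsoft publishes:

#Install the SharePoint Online Management Shell module from the PowerShell Gallery
Install-Module -Name Microsoft.Online.SharePoint.PowerShell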

Notes: The account password must be accessible to the script, so you can store it in the file, enter it every time, or use some sort of retrieval method. We use Keeper Commander to get the password.


#The only variables you should need to enter are the bracketed values below
#Set variables for tenant and service account
$orgName="[yourO365tenant]"
$adminUPN="[yourserviceaccount]@$orgName.onmicrosoft.com"

#yes you have to store the password in clear text, not my idea, blame Microsoft
#We use Keeper Commander to pull the password using the API Token. I've removed that code.
$adminPW = ConvertTo-SecureString -String "[password]" -AsPlainText -Force

#creates the login cred
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $adminUPN, $adminPW

#Connect to SPO Service
Connect-SPOService -Url https://$orgName-admin.sharepoint.com -Credential $cred

#This gets all the sites except personal sites (aka MySites)
$SPOSites = Get-SPOSite -Limit All -IncludePersonalSite:$false

#loop through all sites and set the site collection admin to the SharePoint admin account if not already
foreach($SPSite in $SPOSites){

#Adds the SharePoint admin account above as site collection admin
Set-SPOUser -Site $SPSite -LoginName $adminUPN -IsSiteCollectionAdmin $true

}
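If you end up running it from a Windows server instead of Azure, one way to get the every-12-hours schedule is a PowerShell scheduled job. This is only a sketch; the job name and script path are made-up placeholders:

#Re-run the site admin script every 12 hours (name and path are placeholders)
$trigger = New-JobTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Hours 12) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledJob -Name "Add-SPOSiteAdmins" -FilePath "C:\Scripts\Add-SPOSiteAdmins.ps1" -Trigger $trigger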

First, Because Math. Remember this.

Now on to the real subject: how to actually set up Distributed Cache and the Security Token Service in SharePoint with SAML (ADFS) and load balancing across a 6-tier farm. There is a ton of information on these services; the problem is none of it is fully put together so you can understand what is what and how it affects your farm, your users, and your sleep. For what it’s worth, I will cite the appropriate people, as their information finally led me to the working configuration.

First up is Distributed Cache (DC). Make sure you have AppFabric CU5 or higher installed; it is not rolled up with SharePoint or Windows updates, so you have to get it yourself. Also make sure you understand the reason behind DC: it saves the login token so your users don’t have to log in on every request. This applies to both Forms and SAML claims users. The first thing to do is find out which servers need to run DC and AppFabric. Go to a WFE server and run Get-CacheHost (you might need to run the Use-CacheCluster command first). Then run

Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status

Those commands are thanks to Samuel Betts at <http://blogs.msdn.com/b/sambetts/archive/2014/03/19/sharepoint-2013-distributed-cache-appfabric-troubleshooting.aspx>. I highly suggest you read that post before continuing with this one.

Do Not run DC on a search crawler. Bad things will happen, like AppFabric service crashing and blowing out all the other cache databases. You have been warned.

Now there is a very important thing you need to do before continuing. Log in to each server running a WFE or Search Query role and run Add-SPDistributedCacheServiceInstance, which adds the server to both AppFabric and SharePoint. Then run

Set-NetFirewallRule -DisplayName "File and Printer Sharing (echo request - ICMPv4-In)" -Enabled True

This information is from Sahil Malik <http://www.codemag.com/Article/1309021>. Why is it needed? Sahil describes it very well, and you can read his post.
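Once that’s done on each box, it doesn’t hurt to sanity-check the cluster with the AppFabric cmdlets mentioned earlier; run this from any cache host:

#Load the cluster configuration for this host, then list every cache host and its status
Use-CacheCluster
Get-CacheHost   #each WFE/Search Query server should report a Service Status of UP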

Now you are sort of set up, maybe. So, have you configured your Trusted Identity Token Issuer? No? Go do that, then come back.
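For reference, the general shape of that configuration looks something like the sketch below. The certificate path, realm, claim mapping, and ADFS sign-in URL are all placeholder assumptions; swap in your own values:

#Trust the ADFS token-signing certificate (path is a placeholder)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\adfs-token-signing.cer")
New-SPTrustedRootAuthority -Name "ADFS Token Signing" -Certificate $cert

#Map an incoming claim (email used here as an example) and create the trusted identity token issuer
$emailClaim = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
New-SPTrustedIdentityTokenProvider -Name "ADFS" -Description "ADFS SAML provider" -Realm "urn:sharepoint:contoso" -ImportTrustCertificate $cert -ClaimsMappings $emailClaim -SignInUrl "https://adfs.contoso.com/adfs/ls" -IdentifierClaim $emailClaim.InputClaimType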

Now we need to set up DC and STS. First, there is a DC bug <http://habaneroconsulting.com/insights/SharePoint-2013-Distributed-Cache-Bug>. Read that post for more background, but some of its information is a little off, specifically around MaxConnectionsToServer. That property specifies how many connections are allowed to the cache server to check for a login token, which sounds like something you’d want high to accommodate the number of users you have. However, M$ support informed us after three weeks that this value is effectively limited by the number of processors you have, so keeping the default value of 2 is probably best. If you have a beefier machine with more processors you can raise it, but if you set it higher than your processor count you will get intermittent hangs and high memory usage.

RequestTimeout is the number of milliseconds to wait for the logon token lookup. This is not your friend with SAML claims. Setting it higher, to say 10 seconds, seems to work well; basically you want to avoid falling back to the local cache, as that almost always forces a reauth.

The other thing to consider here is your load balancer: do you use persistence or not? F5 says not to use persistence for SP2013. Supposedly DC works well enough without it, though of course if it did, this blog post wouldn’t exist, nor would the others I linked to. So I caution you: if you don’t use persistence, you had better get this right or you could fall into a bad reauth cycle.

The code below only needs to run once, though it wouldn’t hurt to verify the settings took with the Get commands included below. Then restart the cache service on each server.

$timeout = 10000
$maxConnections = [Max # Processors per WFE]

$DLTC = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
$DLTC.RequestTimeout = $timeout
$DLTC.ChannelOpenTimeOut = $timeout
$DLTC.MaxConnectionsToServer = $maxConnections
Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache -DistributedCacheClientSettings $DLTC
Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache

$DLVSC = Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache 
$DLVSC.ChannelOpenTimeOut = $timeout
$DLVSC.RequestTimeout = $timeout
$DLVSC.MaxConnectionsToServer = $maxConnections
Set-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache -DistributedCacheClientSettings $DLVSC
Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache 

# This should run [Restart-Service -Name AppFabricCachingService] on each cache host
Restart-CacheCluster

Don’t use session cookies unless you have a documented security reason; otherwise your users will hate you. You can change it, but I highly suggest you don’t. Now on to more math. MaxServiceTokenCacheItems and MaxLogonTokenCacheItems are the number of in-memory tokens the SharePoint STS will cache, per server. You want these to line up with the MaxConnectionsToServer setting if your load balancer is running persistence; if not, you might need a higher value. The service token is used whenever your user hits the Search Query or another service application server. LogonTokenCacheExpirationWindow is used for sliding sessions. Again, more math.

The code below only needs to run once on the farm. However you will need to iisreset each server.

$maxTokens = [# of concurrent users / WFEs]
$sts = Get-SPSecurityTokenServiceConfig
$sts.UseSessionCookies = $false
$sts.MaxServiceTokenCacheItems = $maxTokens
$sts.MaxLogonTokenCacheItems = $maxTokens
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Seconds [MATH])
$sts.Update()
iisreset
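As for the math on LogonTokenCacheExpirationWindow: it has to stay smaller than the SAML token lifetime configured on the ADFS relying party, otherwise every request looks expired and users bounce straight back to ADFS. A worked example with assumed numbers, reusing $sts from the script above (your token lifetime will differ):

#Assume the ADFS relying-party TokenLifetime is 480 minutes (8 hours)
#The cached logon token is treated as expired once (TokenLifetime - ExpirationWindow) has elapsed
#480 minute lifetime - 20 minute window = the token is good for 460 minutes before a reauth
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 20)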

After I got this all configured and ran an 8-hour test, the farm performed great and as we had originally expected. As always, your mileage may vary.

Now I need a beer and cigar. Seriously though I hope this helps someone and keeps the frustration down.

So I’m sure I’m not the only person who needs to do site backups. We needed to schedule site backups every night, and I’m very lazy: I don’t want to think about adding a new site collection to a script or a CSV file, I need all the backups to be date stamped, and I need to grab every site collection that matches a certain identity while skipping all the MySite collections. Using regex and a ForEach loop I can back up whichever site collections I want.

First add the snapin and set our date string

Add-PSSnapin "Microsoft.SharePoint.PowerShell"

$DateStr = (Get-Date -Format yyyy_MM_dd)

 

The regex here is matching all identities of (sites|search), which are the only site collections that will be selected for backup.

Get-SPSite -Identity "http://local.contoso.com/(sites|search)/.+" -Regex | ForEach-Object {

 

The internals of the ForEach: I want to take the ServerRelativeUrl, replace each “/” with “-“, build the path from the backup folder, the date, and that name, and then back up the site.

$SiteName = $_.ServerRelativeURL -replace '(/)','-'

$path = ("D:\Backups\" + $DateStr + $SiteName + ".bak")

Backup-SPSite -Identity $_.ID -Path $path

}
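If you re-run the backup on the same day, or don’t want Backup-SPSite to set the site read-only while it runs, you can swap the Backup-SPSite line inside the loop for this variant (same parameters, just two extra switches):

#-Force overwrites an existing .bak file; -NoSiteLock keeps the site read/write during the backup
Backup-SPSite -Identity $_.ID -Path $path -Force -NoSiteLock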

 

 

NOTE: Microsoft is aware of this issue and is currently working on resolving it on their end.

With that in mind, if you have users who can’t log in to Office 365 because they have already created a Microsoft LiveID with their email address (UPN), don’t fret. You can run a simple PowerShell script with the Azure Active Directory (MSOnline) module to get them fixed. In our testing this script worked 100% of the time.

Basically what happens is this: when the user is created via DirSync (aka FIM), the user account (UPN), for whatever reason, is not taken control of by Office 365. So when the user tries to log in to Office 365, the login service gets confused by the two identities and won’t let them in. When you run the PowerShell below, it changes the user’s UPN to the onmicrosoft tenant domain and then flips it back to your domain. When it flips back to your domain, Azure takes control of the account across all Microsoft spaces.

First step, get your O365 creds

$cred = get-credential
Connect-MsolService -credential $cred

Now flip your user and gain full control of their account

$name1 = "user@[domain.com]"
$name2 = "user@[tenant].onmicrosoft.com"
Set-MsolUserPrincipalName -UserPrincipalName $name1 -NewUserPrincipalName $name2
Set-MsolUserPrincipalName -UserPrincipalName $name2 -NewUserPrincipalName $name1
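To confirm the flip stuck, you can pull the user back with the same MSOnline module and check the UPN; a quick sketch:

#Verify the user is back on your domain UPN
Get-MsolUser -UserPrincipalName $name1 | Select-Object UserPrincipalName, DisplayName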

Apparently Amazon doesn’t think that people want to know where their stuff is coming from or how long it will take to get there. I highly disagree. Recently I bought some items from Amazon, and had I known they would ship four business days later and take five days to arrive (nine business days total), I would have gladly paid the extra $5 to another seller located much closer.

Maybe some customers don’t care how long it takes and just want the cheapest price. But what if a customer has one free day scheduled in their busy life and doesn’t want to pay exorbitant expedited shipping costs? We should be able to tell from the product page that the item is coming from CA or MD. eBay tells me this information; Amazon should too. There should also be another piece of information showing how long that seller normally takes to ship, which is information Amazon already has.

Seller  Location  Price    Avg. time to ship (days)  Days to receive in CA once shipped  Total days to receive  Cost increase to get it faster
A       MD        $64.99   3                         5                                   8                      –
B       NY        $68.99   1                         5                                   6                      $4.00
C       FL        $65.99   2                         4                                   6                      $1.00
D       NV        $69.99   1                         2                                   3                      $5.00
E       CA        $67.99   3                         2                                   5                      $3.00
F       TX        $67.99   3                         3                                   6                      $3.00

In the above table you can see that the prices are very similar. However, the seller furthest from me is also the cheapest and has the longest time until delivery, whereas the seller in my own state is marginally higher and the seller with the highest price is just one state away. Given UPS shipping standards, we can calculate the time until I would actually receive the items. In this matrix I would have been willing to pay the extra $3 or $5 to get my item before the day I have scheduled to do the work. Instead I will have to reschedule for three weeks later, because I’m busy enough that I don’t have time any other day. At that point I could have bought it from a local seller, which contributes to my local economy, and waited the extra week I’m having to wait anyway.

The point is that the consumer should be given the information to make their own decision, and an appropriate decision at that. Don’t withhold information that can help a customer make a better decision for themselves.

So we just had an interesting problem where our timer job lost permissions (not sure why yet) and nothing would run, including the thing that led us to it in the first place: User Profile Sync (UPS). After fixing the timer job issue, UPS wouldn’t start correctly. It turns out there is a bug carried over from SP2010 where, if UPS can’t find a valid certificate, it will try to create a new one. Since we tried to start UPS before fixing the timer job permission error, UPS created a new certificate because it couldn’t find the existing one. When we tried to start UPS again we received another error: now there were two valid certificates and UPS was unhappy. So we had to delete the older certificate and restart UPS, and now it’s running as expected. In case anyone else runs into UPS problems: you can bet with 90% probability that your problems are permission related, and the other 10% is something that happened because you had permission problems and now needs fixing too.

So Blitz, in what appears to be a drunken sake decision, decided to show boost on their DTT (Dual Turbo Timer) in hkPa, or hecto-kilopascals. The problem is no one uses hkPa except them, and it isn’t really a standard measure of pressure, per se. In fact it must be something they sort of made up to be cool and different, because the figure they use only sort of makes sense. It is not a hectopascal (hPa, 100 pascals), otherwise 1 would equal 0.014503774 PSI. It also is not a kilopascal (kPa, 1,000 pascals), which equals 0.14503774 PSI. But you know what does equal 14.5 PSI: 1 bar, or 100,000 pascals. And when you multiply hecto (100) by kilo (1,000) you get 100,000 pascals, or 1 bar. So why didn’t they just call it bar? Again, one can only guess. But now you know more than you ever wanted to about it, and how the math works out that 1 hkPa (or bar) equals 14.5 PSI.
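Worked out, the unit math is:

hkPa = hecto x kilo x pascals = 100 x 1,000 Pa = 100,000 Pa = 1 bar
1 bar = 100,000 Pa / 6,894.76 Pa per PSI ≈ 14.5 PSI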

TL;DR, really that lazy huh? Well don’t blame me when you blow up your motor for not understanding the math behind making your engine run.

Multiply the boost readout by 14.5 and you’ll get PSI. Here’s a little cheat table I put together in Excel.

PSI   hkPa (bar)    PSI   hkPa (bar)    PSI   hkPa (bar)
0.5   0.03          9     0.62          17.5  1.21
1     0.07          9.5   0.66          18    1.24
1.5   0.10          10    0.69          18.5  1.28
2     0.14          10.5  0.72          19    1.31
2.5   0.17          11    0.76          19.5  1.34
3     0.21          11.5  0.79          20    1.38
3.5   0.24          12    0.83          20.5  1.41
4     0.28          12.5  0.86          21    1.45
4.5   0.31          13    0.90          21.5  1.48
5     0.34          13.5  0.93          22    1.52
5.5   0.38          14    0.97          22.5  1.55
6     0.41          14.5  1.00          23    1.59
6.5   0.45          15    1.03          23.5  1.62
7     0.48          15.5  1.07          24    1.66
7.5   0.52          16    1.10          24.5  1.69
8     0.55          16.5  1.14          25    1.72
8.5   0.59          17    1.17

We are in the midst of building a custom portal for SCSM; contact us if you’d like to purchase it from us. Sales pitch aside, trying to get multiple pieces of information out of SCSM and sort them isn’t easy. First of all you have to deal with the new Criteria garbage to query SCSM, because apparently LINQ or CAML wasn’t good enough, so let’s create a brand-new querying language that only one product will use. Seriously Microsoft, get some unity already. I digress….

Getting incidents is pretty simple; using the resulting object isn’t. Rob Ford does a great job of helping you understand it (http://scsmnz.net/c-code-snippets-for-service-manager-1/). That said, how do you “order by” or get more information like Status? It isn’t “easy peasy lemon squeezy”, but you should be able to make sense of it. This took a few days to put together from a few different sources, so hopefully it will help somebody.

These code samples are in order, so copy each section in sequence and your controller should work.

This should look familiar. We get our management packs and our user and build our first Criteria.

var strSetting = ConfigurationManager.AppSettings["Group"];
EnterpriseManagementGroup mg = new EnterpriseManagementGroup(strSetting);

//get our user
var strUserName = GetIdentityUsername();

//build criteria to get incidents where user is AffectedUser
var strCriteria = String.Format(@"$Context/Path[Relationship='WorkItem!System.WorkItemAffectedUser' TypeConstraint='System!System.Domain.User']/Property[Type='System!System.Domain.User']/UserName$ Equal" + strUserName + @"");

//Get the management pack class, type projection, and library MP for Incident
ManagementPackClass mpcIncident = mg.EntityTypes.GetClass(new Guid("a604b942-4c7b-2fb2-28dc-61dc6f465c68"));
ManagementPackTypeProjection mptpIncident = mg.EntityTypes.GetTypeProjection(new Guid("1862825e-21bc-3ab2-223e-2a7f2439ba75"));
ManagementPack mpIncidentLibrary = mg.ManagementPacks.GetManagementPack(new Guid("DD26C521-7C2D-58C0-0980-DAC2DACB0900"));
ObjectProjectionCriteria opcIncidents = new ObjectProjectionCriteria(strCriteria, mptpIncident, mpIncidentLibrary, mg);

The tricky part is the order by. There is another new class called ObjectQueryOptions. It takes a loose form of XML where you add references to the management packs you want to use and define the sort order. The IObjectProjectionReader call is the same as before, except you pass in the new query options object.

var strQuery = String.Format(@"<Sorting xmlns=""http://Microsoft.EnterpriseManagement.Core.Sorting"">
<Reference Id=""System.WorkItem.Incident.Library"" Version=""7.5.3079.0"" Alias=""WorkItem"" PublicKeyToken=""31bf3856ad364e35""/>
<Reference Id=""Microsoft.Windows.Library"" Version=""7.5.3079.0"" Alias=""WinLib"" PublicKeyToken=""31bf3856ad364e35""/>
<SortProperty SortOrder=""Ascending"">$Context/Property[Type='WorkItem!System.WorkItem.Incident']/Priority$</SortProperty>
<SortProperty SortOrder=""Descending"">$Context/Property[Type='WorkItem!System.WorkItem.Incident']/Id$</SortProperty>
</Sorting>");

ObjectQueryOptions orderedIncidents = new ObjectQueryOptions();
orderedIncidents.AddSortProperty(strQuery, mptpIncident, mg);

IObjectProjectionReader<EnterpriseManagementObject> oprIncidents = mg.EntityObjects.GetObjectProjectionReader<EnterpriseManagementObject>(opcIncidents, orderedIncidents);

So now you have your data, but it’s all one object and you want to show things like the priority and status to the user. Half of the fields in SCSM are relationships, so you have to set up a new criteria and fetch the single work item, inside a foreach loop, to get the status for each. Notice we are using a DataTable, which we will pass through to the view.

//create datatable to pass to view
DataTable dt = new DataTable();
DataRow dr = null;

dt.Columns.Add(new DataColumn("Incident", typeof(string)));
dt.Columns.Add(new DataColumn("Status", typeof(string)));
dt.Columns.Add(new DataColumn("Last Modified", typeof(string)));

foreach (EnterpriseManagementObjectProjection emopIncident in oprIncidents)
{
    dr = dt.NewRow();
    dr["Incident"] = emopIncident.Object.DisplayName;

    //This is the work item (such as a service request) ID that we're looking for
    String workItemId = emopIncident.Object.Name;

    //Setup the criteria. This will instruct Service Manager to "Get me the incident request with Id: IR{0}"
    //Get the System.WorkItem class
    ManagementPackClass mpcWorkitem = mg.EntityTypes.GetClass(new Guid("f59821e2-0364-ed2c-19e3-752efbb1ece9"));

    //Get the System.WorkItem.Library MP
    ManagementPack mpWorkitem = mg.ManagementPacks.GetManagementPack(new Guid("405d5590-b45f-1c97-024f-24338290453e"));

    string strIncidentSearchCriteria = "";

    //Attempt to get results for the single work item
    strIncidentSearchCriteria = String.Format(@"<Criteria xmlns=""http://Microsoft.EnterpriseManagement.Core.Criteria/"">" +
        "<Expression>" +
        "<SimpleExpression>" +
        "<ValueExpressionLeft>" +
        "<Property>$Context/Property[Type='System.WorkItem']/Id$</Property>" +
        "</ValueExpressionLeft>" +
        "<Operator>Equal</Operator>" +
        "<ValueExpressionRight>" +
        "<Value>" + workItemId + "</Value>" +
        "</ValueExpressionRight>" +
        "</SimpleExpression>" +
        "</Expression>" +
        "</Criteria>");

    EnterpriseManagementObjectCriteria emocWorkitem = new EnterpriseManagementObjectCriteria((string)strIncidentSearchCriteria, mpcWorkitem, mpWorkitem, mg);
    IObjectReader<EnterpriseManagementObject> readerWorkitem = mg.EntityObjects.GetObjectReader<EnterpriseManagementObject>(emocWorkitem, ObjectQueryOptions.Default);
    EnterpriseManagementObject emoWorkItem = readerWorkitem.ElementAt(0);

Great, I have my single work item; now I need to get the status. Remember, everything is a relationship, so you have to loop through the status enumeration until it matches and then output the display name. If a status is empty it will throw an exception, so use a try/catch to suppress the error. Then finish up the rest and add the row to the table.

    String workItemStatus = "";

    //you need a try catch for when the status is empty
    try
    {
        //Get the Status DisplayName for the title
        Guid gStatusCategory = new Guid("5e2d3932-ca6d-1515-7310-6f58584df73e");

        foreach (ManagementPackEnumeration mpeClass in mg.EntityTypes.GetChildEnumerations(gStatusCategory, TraversalDepth.Recursive))
        {
            //note: no stray semicolon after the if, or the block always runs
            if (mpeClass.Id.ToString() == emoWorkItem[mpcIncident, "Status"].Value.ToString())
            {
                workItemStatus = mpeClass.DisplayName;
                break;
            }
        }
    }
    catch (Exception ex)
    {
        //workItemStatus = ex.Message + " | " + ex.Source;
    }

    dr["Status"] = workItemStatus;
    dr["Last Modified"] = emopIncident.Object.LastModified.ToLocalTime();
    dt.Rows.Add(dr);
}

return View(dt);