Wi-Fi DoS by RF-Jamming from faulty Intel Driver


It all started with people telling me that whenever they gathered in a meeting room, everybody lost their connection to the Wi-Fi. This happened sporadically, and everyone got disconnected at the same time. Naturally, the AP and the Wi-Fi were suspected to be the source of the problem, so I started my analysis.
After some digging and frame capturing, it turned out that during the occurrence of this issue, no Wi-Fi frames were transmitted on the given channel for about 10 seconds, not even Beacon Frames.

As an example, below is a Wireshark IO Graph with all the Beacon Frames my sniffer received on the affected channel during the issue:

Wireshark IO Graph with display filter “wlan.fc.type_subtype == 0x0008”

There was no logical explanation for me, so I fired up Ekahau and the Sidekick to look at the RTFM (real-time frequency monitor), and the following revealed itself:

RF-Jamming on Channel 36+ / 36w

As you can see, something interferes with the Wi-Fi and jams up the whole channel for about 10 seconds. Upon further investigation, it became apparent that the AP was not the source of the interference, but one of the notebooks in the room.

Reproduce Issue

It seemed that the RF-Jamming happened mostly after the notebooks woke up from some power saving or hibernation mode. I looked for a way to reproduce it and found the following process that triggers it 100% of the time:

1. Disable Wi-Fi
2. Enable Wi-Fi
3. Connect to Wi-Fi
4. Go to https://www.speedtest.net and start a speedtest
5. During the speedtest, the Wi-Fi NIC crashes and the RF-Jamming happens. Most of the time, the speedtest website crashes with this error message

Please note: During the RF-Jamming, the notebook reinitializes the Wi-Fi card (this happens automatically) and reconnects to the Wi-Fi. After that, the issue does not occur any more when you start the speedtest again. It looks like the reason for this is that after the reinitialization, the notebook does not use TXOPs with CF-End frames any more (see section “Setup That Caused Issue” for more details).

But when you follow the steps above, you can reproduce it again.

Observe Issue

You can use different methods to observe the test above.

  • Frame Capture next to the notebook that causes the RF jamming (see section “Introduction”)
  • Spectrum Analysis next to the notebook that causes the RF jamming (see section “Introduction”)
  • Ping from or to a client within the affected BSA (Basic Service Area)
PING loss during RF-Jamming
  • Check the event log of the notebook that is causing the issue. There is a Netwtw06 5007 Error (“5007 – TX/CMD timeout (TfdQueue hanged)”) and a 10400 NDIS Warning that indicates that the Wi-Fi NIC was reinitialized.
Event ID 5007 – “5007 – TX/CMD timeout (TfdQueue hanged)”

Setup That Caused Issue

With a lot of trial and error, I narrowed down the setup and combination of variables that causes this behaviour. Please note that this is based on my lab results with the equipment I had available, so your situation may vary (especially the part “Wi-Fi Specific Characteristics”). You are more than welcome to ping me with your results, and I will update this post accordingly.

Notebook Characteristics

  • WiFi Card: Intel® Dual Band Wireless-AC 8265
  • Driver Version: 20.50.0.X

Wi-Fi Specific Characteristics

  • SSID with 802.11ac (VHT) capabilities
  • Coding: BCC
  • Notebook using TXOPs with CF-End Frames
During a TXOP, the transmitter (notebook) claims the channel’s airtime by setting a high value in the duration field of its frames. After the TXOP, the transmitter resets the NAV timer by sending out a CF-End Frame with a duration of 0.
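The NAV mechanics described above can be sketched as a toy model (an illustration of the concept only, not actual driver or firmware behaviour):

```python
# Toy model of the NAV (Network Allocation Vector) mechanism described above.
class NavTimer:
    """Tracks how long a station must defer, based on overheard Duration fields."""

    def __init__(self):
        self.nav_until_us = 0.0

    def on_frame(self, now_us, duration_us):
        # Overhearing a frame: adopt its Duration field if it extends our NAV.
        self.nav_until_us = max(self.nav_until_us, now_us + duration_us)

    def on_cf_end(self, now_us):
        # CF-End carries a Duration of 0 and truncates the TXOP:
        # every listening station resets its NAV and may contend again.
        self.nav_until_us = now_us

    def medium_reserved(self, now_us):
        return now_us < self.nav_until_us


sta = NavTimer()
sta.on_frame(now_us=0, duration_us=5000)   # TXOP holder claims 5 ms of airtime
print(sta.medium_reserved(1000))           # True: everyone else stays quiet
sta.on_cf_end(now_us=1500)                 # TXOP holder is done early
print(sta.medium_reserved(1600))           # False: channel usable again
```

Note that in this model, if the CF-End never arrives, all listening stations stay silent until the claimed duration expires.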

Resolve Issue

I found two ways to remedy this problem:

  • Update Driver: The simplest way to solve this problem is to upgrade the Intel® Dual Band Wireless-AC 8265 driver to a version other than 20.50.0.X. However, if you have hundreds of clients with the faulty driver version, this might be a drag and will need proper planning and execution.
  • Switch coding to LDPC: An alternative solution that worked for me is to switch the coding from BCC to LDPC on the AP/radio. Please note that LDPC is only an optional feature in the Wi-Fi Certified ac program (and Wi-Fi Certified n, for that matter), so there might be clients that have issues with it.

As always, please don’t hesitate to hit me up if you have any questions or different results on this topic.

Cheers, Renzo

Lab Time: The Effect Of Rate Limiting On Wi-Fi


This post is about the effect of rate limiting on Wi-Fi and is inspired by the presentation Troy Martin (@troymart) gave at WLPC 2018 Prague (Link) and his recent appearance in the Clear To Send (@cleartosend) podcast episode 157 (Link).

Especially his comparison between “configuring rate limiting” vs. “disabling high data rates” was very interesting to me, and I wanted to capture and compare the effect of both of them from a L1 and L2 perspective in my lab.


When a Wi-Fi client wants to transfer a file, it tries to max out the speed available up to the point where it hits a limit.
In Wi-Fi this limit or “bottleneck” is often one of the following:

  • Airtime
  • WAN Link

So what should you do if you’re only able to get a 10Mbit/s WAN link, but you have APs that support data rates up to 400Mbit/s? In this example, the bottleneck will almost certainly be the WAN link.
How can you make sure users on the guest Wi-Fi do not use up your entire WAN throughput and starve your corporate devices?

Option 1 – Disable High Data Rates (BAD!)

From the problem mentioned above, one solution might sound logical at first: why don’t we just disable every data rate on the guest SSID except 6Mbit/s, so there will always be 4Mbit/s left for corporate devices?
There are two fundamental mistakes in this thought:

  • Because of the nature of Wi-Fi, the raw connection rate is not equal to the throughput you will get. There are a lot of other frames getting transmitted that take up airtime (management & control frames), in addition to the whole arbitration process that needs to take place (Link). A rule of thumb is that you can get around 50% throughput out of your connection rate (there are a lot of “ifs” to consider, so please don’t take this as a fact, more as an approximate reference value). So if you are connected on your 6Mbit/s-only SSID, you’ll get a throughput of roughly 3Mbit/s.
    So then just enable data rates up to 12Mbit/s to get the mentioned 50% throughput of 6Mbit/s, right?
  • By using only these low data rates, you use up all the airtime with a single SSID. The corporate SSID running on the same AP (same channel) would be affected as well. Think of a crossroad (“channel”) with two lanes (“SSIDs”), where you want to regulate the flow of the first lane (“Guest SSID”) in order to guarantee a decent flow (“throughput”) on the second lane (“Corporate SSID”). To solve this issue, you would never make the following proposal:

“To limit the number of cars (“data sent over Wi-Fi”) from the first lane (“Guest SSID”), they are only allowed to drive over this crossroad at 6 miles per hour (“send data over the air with 6Mbit/s”). Meanwhile, the second lane (“Corporate SSID”) is still allowed to drive over this crossroad at 400 miles per hour (“send data over the air with 400Mbit/s”).”

Does that limit the speed of the Guest SSID? Certainly! But in addition to limiting the speed of the Guest SSID, you have also saturated the channel for everybody else (“blocked the crossroad for the other lane, because of the slow speeds”).
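The rule of thumb above can be put into numbers with a quick back-of-the-envelope sketch (the 50% efficiency figure is the rough approximation from the text, not a measured value):

```python
# Back-of-the-envelope airtime math for the example above.
def airtime_seconds(payload_mbit, phy_rate_mbps, efficiency=0.5):
    """Seconds of airtime needed to move `payload_mbit` at a given PHY rate,
    using the ~50% rule of thumb (throughput is roughly half the PHY rate)."""
    return payload_mbit / (phy_rate_mbps * efficiency)

# Moving the same 10 Mbit over the guest SSID:
slow = airtime_seconds(10, 6)     # locked to 6 Mbit/s
fast = airtime_seconds(10, 400)   # allowed to use 400 Mbit/s

print(f"6 Mbit/s only: {slow:.2f}s of airtime")   # 3.33s
print(f"400 Mbit/s:    {fast:.2f}s of airtime")   # 0.05s
```

The same payload occupies the channel roughly 67 times longer at the low rate, which is exactly the airtime everybody else on the channel loses.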

Option 2 – Configuring Rate Limiting on SSIDs (BETTER!)

The following proposal would be much better:

“Both lanes (“SSIDs”) are allowed to drive over the crossroad (“channel”) at 400 miles per hour (“send data over the air with 400Mbit/s”). However, the first lane (“Guest SSID”) has a speed limit behind the crossroad (“rate limit on the AP”). The intelligent drivers (“TCP window size”) from the first lane (“Guest SSID”) can see that they can’t just drive over the crossroad at a high interval, and therefore adjust their interval and wait on their side of the crossroad to not block it. During these waiting periods, the crossroad is available for the second lane (“Corporate SSID”).”

As you can see, by rate limiting on the AP instead of disabling high data rates, you do not hog the channel and do not use up all the airtime, which allows other SSIDs or APs to operate on the same channel.
If you want to get more information about airtime, I recommend you watch Andrew von Nagy’s (@revolutionwifi) “High-Density Wi-Fi Design” presentation.

Lab Time

The concepts mentioned above are easy to demonstrate when you do a spectrum analysis and frame capture during iperf tests.
I have configured the following two scenarios:

  • Test 1: TCP upload on 802.11g AP with 5Mbit/s rate limiting configured
  • Test 2: TCP upload on 802.11g AP with highest allowed data rate configured to 12Mbit/s

When you compare the output of the spectrum analyzer, you can see that a lot less airtime is used with the rate limiting in place:

Spectrum analysis Test1

Spectrum analysis Test2

By looking at the frame captures, you can see that the airtime utilization is less than half when you use rate limiting:

Airtime Utilization Test1

Airtime Utilization Test2

When you put this into numbers, you can see that with rate limiting, the client was able to upload 40.1MB while only occupying the channel for ~9.8s (QoS Data & ACKs):

Airtime & Bytes Test1

Meanwhile, with only low data rates allowed, it took the client ~23.9s to upload 31.3MB:

Airtime & Bytes Test2

When you compare two Data Frames from the tests, you can see that they are both the same size, but they do not take the same time to be transmitted over the air due to the difference in data rate:

Data Frame Test1

Data Frame Test2
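The two lab runs can be boiled down to a single metric, MB transferred per second of occupied airtime (numbers taken from the captures above):

```python
# The two lab runs above, reduced to "MB transferred per second of airtime".
rate_limited   = {"mb": 40.1, "airtime_s": 9.8}    # Test 1: 5 Mbit/s rate limit
low_rates_only = {"mb": 31.3, "airtime_s": 23.9}   # Test 2: rates capped at 12 Mbit/s

eff1 = rate_limited["mb"] / rate_limited["airtime_s"]
eff2 = low_rates_only["mb"] / low_rates_only["airtime_s"]

print(f"Rate limiting:  {eff1:.2f} MB per second of airtime")  # 4.09
print(f"Low rates only: {eff2:.2f} MB per second of airtime")  # 1.31
```

Per second of airtime, the rate-limited client moved roughly three times as much data as the client stuck on low rates.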

Limits Of Rate Limiting: UDP

But there is a limit when it comes to rate limits (pun intended) and preserving airtime:

Rate limiting works well if TCP is used as the layer 4 protocol. TCP is connection-oriented and tries to find the sweet spot of throughput via its window sizes. UDP, however, is not connection-oriented, which means the sender does not care whether the receiver gets the packets or not. The “weakest link” between sender and receiver will drop the packets if there is too much traffic, as shown in the following example:

A client with a Wi-Fi connection rate of 54Mbit/s uploads a 25Mbit/s UDP stream, so it sends its frames out into the air, using up all the airtime. The AP receives them, but can’t forward the frames due to the 5Mbit/s rate limit. In the process, the AP drops 20Mbit/s (80%) and only 5Mbit/s (20%) is forwarded.

If the same example is reversed (25Mbit/s UDP download), the AP will drop the frames too, but this time before they get sent out and consume all the airtime.
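The UDP example in numbers (a simplified steady-rate model; real shapers work on queues and bursts):

```python
# The UDP example in numbers: the rate limiter can only drop what the sender
# keeps pushing, because UDP has no feedback loop to slow the sender down.
def rate_limit_udp(offered_mbps, limit_mbps):
    forwarded = min(offered_mbps, limit_mbps)
    dropped = offered_mbps - forwarded
    return forwarded, dropped

forwarded, dropped = rate_limit_udp(offered_mbps=25, limit_mbps=5)
print(f"forwarded: {forwarded} Mbit/s ({forwarded / 25:.0%})")  # 5 Mbit/s (20%)
print(f"dropped:   {dropped} Mbit/s ({dropped / 25:.0%})")      # 20 Mbit/s (80%)
```

In the upload direction, all 25Mbit/s still burned airtime before the AP dropped 80% of it; only in the download direction does the drop happen before the air.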


Please keep in mind that this post is about the idea of rate limiting in regard to bottlenecks.
I have talked with some people who recently rolled out brand-new APs with only data rates up to 24Mbit/s enabled. There was a good reason to do so: it was a warehouse installation where only scanners were connected to the Wi-Fi, and they had issues with dynamic rate switching. So remember that this warehouse installation was about stability and not about throughput bottlenecks.

Please also remember that rate limiting should only be used as a last resort, because it may lead to a poor end-user experience. Details about considerations and how to tune it appropriately are discussed in Clear To Send episode 157 (Link).

Cheers, Renzo

802.11 OFDM Data Rates – The Math Behind The Numbers


As a WiFi admin, you are probably familiar with WiFi Data Rates and the concept of DRS (Dynamic Rate Switching/Shifting/Selection).
Numbers like 6Mbit/s, 54Mbit/s, 144Mbit/s, 300Mbit/s and 400Mbit/s all look pretty familiar – but what are they based on?
The numbers are not random at all – the data rates of OFDM PHYs (802.11a, 802.11g, 802.11n, 802.11ac) are all based on the same single formula, which consists of 5 variables:

  • Data Subcarriers (number of subcarriers that transmit modulated data)
  • Modulation (amount of bits each data subcarrier can represent; predefined mix and match possibilities with coding)
  • Coding (defines how much of the modulated data is useful data and how much of it is for error correction; predefined mix and match possibilities with modulation)
  • Spatial Streams (number of unique MIMO data streams that can be sent in parallel)
  • Symbol Interval Time (the sum of Data Interval Time + Guard Interval)

I’d like to keep this blogpost short and sexy, and just show how you can calculate the different data rates based on these variables.
In the following days and weeks, I’m going to publish additional blogposts, where I’ll explain the different variables thoroughly, and link them in this blogpost.


With the variables described above, the following formula is used to calculate OFDM data rates:

Data Subcarriers * Modulation * Coding * Spatial Streams / (Data Interval Time + Guard Interval) = Data Rate in Bits/Microsecond = Data Rate in Mbit/s
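The formula fits in a few lines of Python (a small sketch; the subcarrier counts and symbol times are taken from the example calculations at the end of this post):

```python
# The OFDM data-rate formula from above as a small Python function.
def ofdm_data_rate(subcarriers, bits_per_subcarrier, coding, streams, guard_us=0.8):
    """OFDM data rate in Mbit/s. The data interval is fixed at 3.2us;
    only the guard interval varies (0.8us long, 0.4us short)."""
    return subcarriers * bits_per_subcarrier * coding * streams / (3.2 + guard_us)

print(ofdm_data_rate(48, 1, 1/2, 1))                     # 6.0   -> 802.11a/g lowest
print(ofdm_data_rate(48, 6, 3/4, 1))                     # 54.0  -> 802.11a/g highest
print(round(ofdm_data_rate(52, 6, 5/6, 2, 0.4), 1))      # 144.4 -> 802.11n 20MHz, 2 SS, short GI
print(round(ofdm_data_rate(108, 6, 5/6, 2, 0.4)))        # 300   -> 802.11n 40MHz, 2 SS, short GI
print(round(ofdm_data_rate(108, 8, 5/6, 2, 0.4)))        # 400   -> 802.11ac 40MHz, 2 SS, short GI
```

The `bits_per_subcarrier` argument is the modulation (BPSK = 1, QPSK = 2, 16QAM = 4, 64QAM = 6, 256QAM = 8 bits).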

Possible Values Of Variables

Data Subcarriers

  • 802.11a & 802.11g: 48
  • 802.11n & 802.11ac, 20MHz channel: 52
  • 802.11n & 802.11ac, 40MHz channel: 108
  • 802.11ac, 80MHz channel: 234
  • 802.11ac, 160MHz channel: 468

Modulation & Coding
(BPSK = 1, QPSK = 2, 16QAM = 4, 64QAM = 6, 256QAM = 8 bits per subcarrier)

  • 802.11a: BPSK 1/2, BPSK 3/4, QPSK 1/2, QPSK 3/4, 16QAM 1/2, 16QAM 3/4, 64QAM 2/3, 64QAM 3/4
  • 802.11g: BPSK 1/2, BPSK 3/4, QPSK 1/2, QPSK 3/4, 16QAM 1/2, 16QAM 3/4, 64QAM 2/3, 64QAM 3/4
  • 802.11n: BPSK 1/2, QPSK 1/2, QPSK 3/4, 16QAM 1/2, 16QAM 3/4, 64QAM 2/3, 64QAM 3/4, 64QAM 5/6
  • 802.11ac: BPSK 1/2, QPSK 1/2, QPSK 3/4, 16QAM 1/2, 16QAM 3/4, 64QAM 2/3, 64QAM 3/4, 64QAM 5/6, 256QAM 3/4, 256QAM 5/6

Spatial Streams

  • 802.11a & 802.11g: 1
  • 802.11n: 1-4
  • 802.11ac: 1-8

Symbol Interval Time

  • Data Interval Time: 3.2µs (all OFDM PHYs)
  • Guard Interval: 0.8µs (long; all OFDM PHYs) or 0.4µs (short; 802.11n & 802.11ac only)

As you can see in the overview above, Data Subcarriers, Symbol Interval Time and Spatial Streams are fixed values for 802.11a & 802.11g. Only Modulation and Coding are variable – therefore, only a small number (8) of different data rates are possible for these PHYs.

In 802.11n & 802.11ac, all 5 variables can be mixed almost arbitrarily, leading to a massive amount of different possible data rates and making it impossible to keep all of them in mind.



To wrap this blogpost up, here are the calculations of the 5 data rates I mentioned at the beginning of this blogpost:


  • 48 * 1 * 1/2 * 1 / (3.2µs + 0.8µs) = 6Mbit/s
    Lowest possible Data Rate for 802.11a & 802.11g
  • 48 * 6 * 3/4 * 1 / (3.2µs + 0.8µs) = 54Mbit/s
    Highest possible Data Rate for 802.11a & 802.11g
  • 52 * 6 * 5/6 * 2 / (3.2µs + 0.4µs) ≈ 144.4Mbit/s
    Highest possible Data Rate for 802.11n, where Channel Width = 20MHz, Spatial Streams = 2, Guard Interval = Short
  • 108 * 6 * 5/6 * 2 / (3.2µs + 0.4µs) = 300Mbit/s
    Highest possible Data Rate for 802.11n, where Channel Width = 40MHz, Spatial Streams = 2, Guard Interval = Short
  • 108 * 8 * 5/6 * 2 / (3.2µs + 0.4µs) = 400Mbit/s
    Highest possible Data Rate for 802.11ac, where Channel Width = 40MHz, Spatial Streams = 2, Guard Interval = Short

ACI – Overlapping vs. Nonoverlapping Adjacent Channel Interference


Whether we are designing, troubleshooting or just reading about Wireless LANs, we come across the catchword ‘ACI’ (Adjacent Channel Interference).
While most people are familiar with the concept of overlapping ACI in the 2.4GHz frequency band (“Don’t use channel 1 and channel 2 in neighboring BSSs, because they overlap” or “Only use channels 1, 6, 11”), nonoverlapping ACI is the second and lesser-known form of ACI, and it impacts not only the 2.4GHz frequency band, but 5GHz as well.

Overlapping Adjacent Channel Interference

In the 2.4GHz frequency band, overlapping ACI is our daily enemy. The individual channels in this frequency band are separated by only 5MHz (starting at channel 1 @ 2412MHz), but a WiFi channel is at least 20MHz (or 22MHz for 802.11b) wide, so you basically only have channels 1, 6 and 11 available that do not overlap (some might argue that 1, 5, 9, 13 is an option as well with OFDM):

When you place neighboring APs on overlapping channels (for example channel 1 and channel 3), you make it impossible for them to properly follow the arbitration process (DCF, “Distributed Coordination Function”, or EDCA, “Enhanced Distributed Channel Access”):
The first step in the arbitration process is to check whether or not the medium is clear via physical carrier sense (CCA, “clear channel assessment”), which on one hand listens for 802.11 preambles (SD, “signal detect”), and on the other hand listens for other RF transmissions that it can’t interpret (ED, “energy detect”). Preambles can be decoded at ~4dB above the noise floor, while ED kicks in at 20dB above the SD threshold. As an example, if you have a noise floor of -94dBm and a STA detects RF energy at -70dBm, it will not be able to get past the “CCA” part of the arbitration process.
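The threshold arithmetic from this example can be written down as a tiny decision function (a simplification; real CCA behaviour depends on PHY, bandwidth and implementation):

```python
# The CCA thresholds from the example above: -94 dBm noise floor,
# signal detect ~4 dB above it, energy detect 20 dB above the SD threshold.
NOISE_FLOOR_DBM = -94
SD_THRESHOLD_DBM = NOISE_FLOOR_DBM + 4    # -90 dBm: decodable 802.11 preambles
ED_THRESHOLD_DBM = SD_THRESHOLD_DBM + 20  # -70 dBm: any other RF energy

def cca_medium_busy(rssi_dbm, is_80211_preamble):
    """True if the station must defer (medium considered busy)."""
    threshold = SD_THRESHOLD_DBM if is_80211_preamble else ED_THRESHOLD_DBM
    return rssi_dbm >= threshold

# Undecodable energy from an overlapping channel at -70 dBm blocks arbitration:
print(cca_medium_busy(-70, is_80211_preamble=False))  # True
# The same energy one dB weaker would pass the ED check:
print(cca_medium_busy(-71, is_80211_preamble=False))  # False
```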

Think of a political rally where two competitors have a booth in the same room (“overlapping channel”), but one is extremely loud (“24dB above the noise floor”): the second guy has no chance to talk (“transmit frames”); he is basically the victim of a DoS (Denial of Service) attack. If they changed it into a discussion and took turns (“follow the arbitration process by being on the same channel”), both would get equally shared timeslots to speak (“airtime to send frames”).

Or even better: if they were in two separate rooms (“nonoverlapping channels”), both of them could recite their agenda without having to share the stage (“airtime”) at all.

Nonoverlapping Adjacent Channel Interference

So why do these ‘nonoverlapping’ channels (despite the name) still interfere? The 2nd type of ACI is noticeable when you look at the ‘Transmit Spectrum Mask’ of a WiFi OFDM signal. A 20MHz-channel does not just ‘stop’ after 20MHz (green) – there are ‘sideband lobes’ (red) that bleed into the neighboring channels:

In the 802.11 standard, it is defined that the interim transmit spectral mask ‘shall have a 0 dBr (dB relative to the maximum spectral density of the signal) bandwidth of 18MHz, -20 dBr at 11 MHz frequency offset, -28 dBr at 20MHz frequency offset, and -40 dBr at 30 MHz frequency offset and above.’ (802.11-2016 subclause – Transmit spectrum mask)

Or translated:
At the offset of 11 MHz from the center frequency, the signal/power has to be reduced by 20 dB (100 times weaker), at the offset of 20 MHz from the center frequency, the signal/power has to be reduced by 28 dB (~630 times weaker), and at the offset of 30 MHz from the center frequency, the signal/power has to be reduced by 40 dB (10’000 times weaker).

At first glance, this might seem like a lot. But if, for example, you see a neighboring AP at a signal strength of -40 dBm, a reduction by 20 dB (resp. 28 dB) is still a signal of -60 dBm (resp. -68 dBm), thus impacting the energy detection (ED) of STAs on neighboring channels during the physical carrier sense mentioned above.
You can perfectly see the impact when you do a spectrum analysis on a heavily loaded channel: this busy 52@20MHz channel bleeds into the whole UNII-1 and UNII-2 band in close proximity to the AP:
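The quoted mask breakpoints and the -40 dBm example can be checked with a few lines of Python (note that the real mask is piecewise linear between the breakpoints; this sketch only evaluates the quoted corner points):

```python
# The transmit-spectral-mask breakpoints quoted above, applied to the -40 dBm example.
def mask_attenuation_db(offset_mhz):
    """Required attenuation at the quoted breakpoints of the OFDM spectral mask."""
    if offset_mhz >= 30:
        return 40
    if offset_mhz >= 20:
        return 28
    if offset_mhz >= 11:
        return 20
    return 0  # inside the 0 dBr (18 MHz) bandwidth

def leaked_signal_dbm(signal_dbm, offset_mhz):
    return signal_dbm - mask_attenuation_db(offset_mhz)

print(leaked_signal_dbm(-40, 11))  # -60 dBm: still above a -70 dBm ED threshold
print(leaked_signal_dbm(-40, 20))  # -68 dBm: still above it as well
print(round(10 ** (28 / 10)))      # 631: "28 dB" as a linear power ratio
```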

Or to use the political rally analogy again:
The two politicians are now in separate rooms (“nonoverlapping channels”), but they can still hear each other (“neighboring channels”) through the walls. If the politician in one room (“neighboring channel”) speaks too loudly, the other one has to stop talking (“energy detection during CCA”) because it is too noisy. To avoid this interference, the two conference rooms should be farther apart from each other (“place APs farther apart from each other”), or one group should move to another conference area (“not using adjacent channels”).


Nonoverlapping ACI has its biggest impact in areas where multiple APs are in close proximity on neighboring channels and airtime utilization is high:

  • neighboring apartments in big cities
  • high density environments

To reduce the impact of nonoverlapping ACI, you should:

  • not use neighboring channels on neighboring APs wherever possible
  • try not to place APs right next to APs from neighboring ESSs
  • reduce output power of APs (to reduce ACI in your own ESS)

Cheers, Renzo

802.11w-2009 – Protected Management Frames (PMF): Overview And Lab Tests


The goal of the 802.11w amendment, which was ratified in 2009 and consolidated in the IEEE Std 802.11-2012, was to introduce a way to secure management frames against attacks.

The best-known attacks this amendment is meant to eliminate are the “deauthentication” and “disassociation” attacks, which can be used for multiple purposes:

  •  L2 DoS attack against a single client, a BSS, or even every single STA within the reachable perimeter of the attacker
  •  Forcing clients to authenticate and associate again:
    • can be used to get the SSID from a hidden network by capturing the association request
    • can be used to capture the 4-way handshake, which, in combination with the PSK, enables the attacker to decrypt the L2 frames.
    • can be used for a man in the middle attack by forcing the clients off of the current AP and onto a rogue AP

These attacks are carried out by the attacker sending deauthentication or disassociation frames to clients with a spoofed transmitter address that matches the BSSID of the AP the client is currently connected to (and/or sending the same frames to the AP with the spoofed client MAC address as the transmitter address).
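To see why this spoofing is trivial without PMF, here is the byte layout of such a deauthentication frame: nothing in it authenticates the transmitter address. This is for illustration only (the reason code 7 is an arbitrary pick), not something to transmit:

```python
import struct

# Byte layout of an 802.11 deauthentication frame. Illustration only: it shows
# that the spoofed addresses are plain, unauthenticated fields.
def build_deauth(client_mac: str, bssid: str, reason: int = 7) -> bytes:
    mac = lambda m: bytes.fromhex(m.replace(":", ""))
    frame_control = struct.pack("<H", 0x00C0)  # type 0 (mgmt), subtype 12 (deauth)
    duration = struct.pack("<H", 0)
    seq_ctrl = struct.pack("<H", 0)            # sequence control (0 for simplicity)
    return (frame_control + duration
            + mac(client_mac)                  # addr1: receiver (the victim client)
            + mac(bssid)                       # addr2: transmitter, spoofed as the AP
            + mac(bssid)                       # addr3: BSSID
            + seq_ctrl
            + struct.pack("<H", reason))       # reason code

frame = build_deauth("aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55")
print(len(frame))         # 26 bytes: MAC header plus the 2-byte reason code
print(frame[0] == 0xC0)   # first byte marks it as a deauthentication frame
```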

You might wonder why I am writing a blogpost about an amendment that was ratified almost a decade ago… The reason is: it is more relevant than ever! WPA3 is coming, and one of the main requirements for a STA to get the “Wi-Fi CERTIFIED WPA3” certification is “Require use of Protected Management Frames (PMF)”. So when we want to implement “real WPA3” in the future, we will inevitably face this topic one way or another.

Different Flavours of PMF

The securing of the management frames is negotiated during the 4-way handshake, so PMF is only possible in encrypted networks (PSK or 802.1X).

There are 3 flavours of PMF configurable:

  • None: either not configured/enabled, or not supported
  • Capable/Optional: SSID supports PMF, but lets the client decide whether to use it or not
  • Required/Mandatory: SSID supports PMF, and clients can only connect to the SSID if they support PMF as well

You can check the RSN IE in the Beacon Frames of a network to identify which one of those is configured. In this example, PMF is configured as an option (bit set at “capable”), but is not mandatory (bit not set at “required”):

The cipher suite “BIP” (Broadcast/Multicast Integrity Protocol; suite selector 00-0F-AC:6) is part of the 802.11w amendment as well, and is used to protect multicast and broadcast management frames.

Lab Test

In my lab, I set up an SSID that is PMF-“capable” and connected 2 clients: one of them supports PMF, the other one doesn’t.
First, I sent deauthentication frames from my AP to the clients, which, based on the AP’s event log, still worked:

When you compare the 2 deauthentication frames, you can see that both are marked as “Deauthentication” (subtype 0x000c). But the frame sent to the non-PMF client is all in cleartext…:

…while the frame sent to the PMF-client is encrypted:

For the second test, I simulated an attack by sending out deauthentication frames using aireplay-ng, which sent both deauthentication frames in cleartext:

Due to the “Protected Management Frames” function in place, I was only able to deauthenticate the client not using PMF.


In my opinion, keeping your encrypted WLANs “PMF-free” makes no sense, and you should test and enable the “optional” flavour of PMF if you have not done so yet. It is also a good way to get an overview of your clients’ PMF capabilities by tracking their association details.

Additionally, I would only recommend enabling “mandatory” if a) you know all your clients and are sure about their PMF capabilities, or b) you have 2 SSIDs (one for PMF-capable clients, and one fallback for “legacy” devices).

If you have any questions or different views about this topic, please let me know.
Cheers, Renzo

ExtremeWireless WiNG: radios on DFS-Channels re-enter Scanning-Mode after being edited


Many times, there are small adjustments you have to implement in your live Wireless LAN environments.
I personally am often faced with changes like these:
– Adding a new SSID
– Removing an unused SSID
– Renaming an existing SSID
– Changing the security settings (for example PSK) of an SSID
– Adjusting the output power on a given radio

In the past few years, I was able to implement changes like these in our ExtremeWireless WiNG environments (back then still Zebra Technologies) during working hours, because they had no impact on the end users – until the new 802.11ac models were introduced…
While I was testing the new 802.11ac models, I sometimes noticed a disruption after changing radio settings.
After digging a little deeper and checking the event history, I saw that Access Points running on DFS channels went back into radar scan mode after every change to the radio settings:


As Wireless LANs became more popular, the need for more spectrum grew – to reduce CCI and consequently increase throughput.
Because the newly assigned spectrum was already used by radar systems (weather/flight), which have priority over 802.11 devices, a mechanism was required for Wireless LAN devices to check whether a radar is present (and to avoid the channel if there is one).
Therefore, DFS (Dynamic Frequency Selection) was introduced in the 802.11h-2003 amendment (alongside TPC – Transmit Power Control).
DFS basically works like this:
- Initial scan (CAC – Channel Availability Check): based on the channel and the country you are in, the duration of the CAC can vary (both values are controlled by your local regulatory body; for example, in Switzerland it is 60s for channels 52-64, 100-116, 132-140 and 600s for channels 120-128):

– Continuous check: after the initial scan, the AP is allowed to communicate on the scanned DFS-Channel. However, it still has to check for radar, and if the AP detects one, it has to leave the channel immediately.
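The Swiss CAC durations quoted above, written as a lookup table (values differ per regulatory domain, so treat them as an example, not a reference):

```python
# The Swiss CAC durations quoted above. Check your own regulator before
# relying on these numbers; they vary per regulatory domain.
CAC_SECONDS_CH = {}  # channel -> initial scan duration in seconds
for ch in (*range(52, 65, 4), *range(100, 117, 4), *range(132, 141, 4)):
    CAC_SECONDS_CH[ch] = 60
for ch in range(120, 129, 4):   # the weather-radar channels take ten times longer
    CAC_SECONDS_CH[ch] = 600

print(CAC_SECONDS_CH[64])    # 60
print(CAC_SECONDS_CH[124])   # 600
```

With these numbers, every radio change on an AP parked on channel 120-128 means a 10-minute outage for that radio.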

ExtremeWireless WiNG

So why do the 802.11ac models from ExtremeWireless WiNG re-enter the initial CAC scan after committing changes, even though they were already communicating on these channels before?
After opening a case with Extreme Networks, I was told that this behaviour is a hardware limitation:

“This behavior on AP75XX series Access Point is due to Broadcom chip being in use. APs need to recreate SSID mapping for any change in radio setting that could affect mapped SSIDs on either of radios. This is a known hardware limitation.”

While this is not a big problem for APs running on non-DFS channels (2.4GHz, 5GHz channels 36-48), it clearly causes a disruption for APs running on DFS channels, as the radios go offline for 60 (or even 600) seconds.

With the 3 802.11ac models I had available, I was able to reproduce this issue:

With the 2 802.11n models I had available, the issue did not occur:


The impact of this issue is tremendous – especially if you are running some SSIDs on 5GHz only. It leads to temporary coverage holes and WiFi outages after every radio change.
That being said, you only have 3 options with ExtremeWireless WiNG 802.11ac models:
- Only use non-DFS channels: I definitely would not recommend this, because you lose all the extra spectrum, which will lead to a more congested RF environment and slower throughput rates
- Inform your customer prior to every change that there will be an outage, and do your radio changes during working hours
- Only do radio changes after working hours

I hope this post will help some of you out there. If there are any questions, please let me know.

Cheers, Renzo

New Law For “Professionally Maintained Public Wireless LAN Access Point” In Switzerland


If you intend to provide (or already are providing) a “Professionally Maintained Public Wireless LAN Access Point” in Switzerland, you are affected by the new article (Art. 19 Abs. 2 VÜPF) that became effective on March 1st, 2018:

Original: Art. 19 Abs. 2 VÜPF: ‘Die FDA haben bei professionell betriebenen öffentlichen WLAN-Zugangspunkten sicherzustellen, dass alle Endbenutzerinnen und -benutzer mit geeigneten Mitteln identifiziert werden.’
Translated: ‘Telecommunications Service Providers are obligated to identify end users of “Professionally Maintained Public Wireless LAN Access Points” with “suitable methods”.’

Because this article leaves room for interpretation, a leaflet was published to clarify the four decisive matters of this article:

– What is a “Wireless LAN Access Point”?
– When is a Wireless LAN “public”?
– What defines “professionally maintained”?
– What are “suitable methods” of identification?

This leaflet is in German, so I decided to try my best and translate it into English, to give you a little heads up about the situation here in Switzerland.

What is a “Wireless LAN Access Point”?

They define a “Wireless LAN Access Point” as a means of providing wireless internet access in public spaces or private areas, usually based on the IEEE 802.11 standard.

When is a Wireless LAN “public”?

A Wireless LAN is public when it provides network access to third parties, even if it is password protected. So protecting a Wireless LAN with WPA/WPA2-Personal does not make it private if the passcode is printed out and hanging on the wall.

What defines “professionally maintained”?

A “Wireless LAN Access Point” is “professionally maintained” if it is maintained by an individual or a company that maintains multiple “Public Wireless LANs” on “multiple sites”.
Furthermore, they define “maintain” as providing one or more of the following services:

– configuration management
– remote or on-site support
– access control, authorization management, AAA
– monitoring
– software- and firmware-updates
– capacity management
– end user support

What are “suitable methods” of identification?

As an example, the following methods for identification are presented:
– access code sent via SMS (storing of MSISDN)
– credit card
– trusted information from roaming-partner (WISPr, eduroam)
– individual access code per hotel room connected to a guest registration
– boarding card in airports
– frequent flyer program card

Personal opinion

I think the intention itself is good – in cases where crimes are committed over the internet, there is now a way to identify the people behind them.
Unfortunately, it is not thought through and gives a false sense of security:

– If the criteria for “professionally maintained” are not met, you can do whatever you want with your Wireless LAN. You do not even have to protect it, and people who are up to no good can use it for their bad intentions. This new article punishes the people and companies that professionally provide Wireless LANs, but it is just not consistent.
– Identification via captive portals is usually based on MAC addresses. MAC spoofing is an easy way to piggyback on someone else’s connection. So basically, even if someone is convicted as the culprit, he can plead that it wasn’t him – and there is no way to be sure.

Cheers, Renzo


Merkblatt “WLAN”: https://www.li.admin.ch/sites/default/files/2018-02/Merkblatt%20WLAN.pdf

SR 780.11: https://www.admin.ch/opc/de/classified-compilation/20172173/index.html

AKM Suite Count: 2 – Benefit and Downside of an 802.1X & PSK Hybrid SSID


In this blogpost, I’d like to show you what a single SSID with both 802.1X & PSK enabled looks like, what the use cases for this constellation are, and what could go wrong when you decide to implement it.

Warning in advance: Use this option with caution – there are some pitfalls!

RSN Information Element

Since the introduction of IEEE 802.11i and the Wi-Fi Alliance certification “WPA2”, the proper way to establish an 802.11 connection between a Client STA and an Access Point looks something like this:


After Open System Authentication and Association, the 3rd step initiates the WPA2 authentication process, based on the authentication configured (PSK or 802.1X).

Hint: When there is no security enabled (“None“), which is fairly common in guest networks, no further authentication takes place (maybe a captive portal, but this is no “real” authentication).

When you enable PSK or 802.1X authentication on an SSID, the Beacons and Probe Response Frames include the RSN Information Element, which amongst other things defines the type of encryption and authentication:


RSN Information Element with 802.1X Authentication

In an 802.1X-protected WiFi, inside the RSN Information Element, the AKM Suite Count is set to 1 (this indicates that only 1 type of authentication is allowed) and the AKM Suite is set to 00-0f-ac:1 (the 1 indicates that the Suite type is 802.1X):


RSN Information Element with PSK Authentication

In a PSK-Protected WiFi, inside the RSN Information Element, the AKM Suite Count is set to 1, too, but the AKM Suite is set to 00-0f-ac:2 (the 2 indicates that the Suite type is PSK):


RSN Information Element with 802.1X & PSK Hybrid Authentication

The 802.11-2016 Standard states (subclause – “AKM suites”) that you are allowed to enable both 802.1X & PSK simultaneously, which allows you to operate an SSID with two different types of authentication methods:


NOTE—Selector values 00-0F-AC:1 and 00-0F-AC:2 can simultaneously be enabled by an Authenticator.


If both 802.1X and PSK authentication are enabled in an SSID, inside the RSN Information Element, the AKM Suite Count is now set to 2 (as there are now 2 allowed types of authentication), and below, both 00-0f-ac:1 and 00-0f-ac:2 are listed as valid AKM Suites:
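To make the element layout concrete, here is a minimal Python sketch (my own illustration, not from any capture tool) that parses the AKM suite list out of an RSN Information Element body. The hybrid example bytes are hypothetical but follow the structure described above: version, group cipher suite, pairwise cipher suite list, then the AKM suite list.

```python
import struct

# AKM suite selectors discussed above (OUI 00-0f-ac)
AKM_TYPES = {1: "802.1X", 2: "PSK"}

def parse_akm_suites(rsn_body: bytes):
    """Parse the AKM suite list out of an RSN IE body
    (the bytes after the Element ID and Length octets)."""
    offset = 0
    version, = struct.unpack_from("<H", rsn_body, offset)
    offset += 2
    offset += 4  # skip Group Cipher Suite (3-byte OUI + 1-byte type)
    pairwise_count, = struct.unpack_from("<H", rsn_body, offset)
    offset += 2 + 4 * pairwise_count  # skip pairwise cipher suite list
    akm_count, = struct.unpack_from("<H", rsn_body, offset)
    offset += 2
    suites = []
    for _ in range(akm_count):
        oui, suite_type = rsn_body[offset:offset + 3], rsn_body[offset + 3]
        suites.append((oui.hex("-"), suite_type))
        offset += 4
    return akm_count, suites

# Hypothetical RSN IE body of a hybrid SSID: version 1, CCMP group cipher,
# one pairwise suite (CCMP), and two AKM suites (802.1X and PSK)
hybrid = bytes.fromhex("0100" "000fac04" "0100" "000fac04"
                       "0200" "000fac01" "000fac02")

count, suites = parse_akm_suites(hybrid)
print(count)  # 2
for oui, t in suites:
    print(oui, AKM_TYPES.get(t, "unknown"))  # 00-0f-ac 802.1X / 00-0f-ac PSK
```

Running this against the hybrid body prints an AKM Suite Count of 2 and both suite selectors, exactly as they would show up in a Beacon or Probe Response capture.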


Use Case for 802.1X & PSK Hybrid Authentication

I personally can see 2 valid reasons to use both 802.1X and PSK within the same SSID:

  • Migrate a WiFi from PSK to 802.1X:

To avoid adding a new/different SSID to your environment when you want to migrate your existing SSID from PSK to 802.1X, you can enable both authentication methods on the same SSID. The hybrid-situation would only be temporary, until every client is reconfigured (either via GPOs or manually). After the migration, you can disable PSK, and it would run as a “pure” 802.1X.

  • Onboarding MDM/BYOD-Devices

We were interested in using this as an option to onboard MDM/BYOD-Devices. Users could then connect their devices to the WiFi by entering a PSK, and their device would end up in a VLAN with restricted access to only our MDM/BYOD-Solution, where they could onboard themselves. During onboarding, they would get a user-certificate and the WiFi-Profile would get overridden to 802.1X. After disabling and enabling the WiFi, the device would reauthenticate, but this time via EAP-TLS, and thanks to Dynamic VLAN Assignment, they would end up in the appropriate VLAN.

Problems with 802.1X & PSK Hybrid Authentication

In the example of MDM/BYOD-Onboarding, there are some fundamental problems:

By enabling both 802.1X & PSK on the same SSID, you give every Client STA 2 options to connect to the WiFi. Well, this doesn’t sound so bad, right? But the problem lies underneath: the developers of every operating system want to make everything as simple as possible for end users. So when you click on a new SSID, you don’t have to decide whether you would like to connect to a “WPA Personal”, “WPA2 Personal”, “WPA Enterprise” or “WPA2 Enterprise” network – you automatically get the appropriate prompt to enter your Passphrase or Username/Password. This all happens based on the content of the RSN Information Element within the Beacon or Probe Response Frames.

But now there are 2 different options in this element – which means that depending on the decision the Client STA makes, you either get a PSK-Prompt or a Username/Password-Prompt.

The workaround for this would be to instruct users not to click on the WiFi, but instead to manually create a new WiFi-Profile on their device with the appropriate settings – which is very complicated for people who are not really tech-savvy.


The 2nd problem is that even once the WiFi-Profile was properly configured on the Client STA, some clients were not able to connect to the WiFi. Some sent out a Probe Request, but ignored the respective Probe Responses (probably because they assumed an AKM Suite mismatch), while others started the PSK 4-Way Handshake, but after the 2nd Message decided to start an 802.1X Authentication, which of course eventually failed:


After all the tests I did, I have to say that I would not recommend enabling both 802.1X and PSK on the same SSID in a live environment – simply because you do not really have control over the end systems and their behaviour.

Cheers Renzo

How A Hidden SSID Can Impact Your Roaming Experience


While the IEEE Standard 802.11 defines the Wireless LAN technology, there are some settings (let’s call them tweaks) that AP manufacturers let you implement which are supported by the vast majority of client devices, but are not part of the 802.11 Standard itself.
Arguably two of the most popular ones amongst them are:
– Hidden SSID
– Don’t Answer Broadcast Probes

I myself was fond of these settings for a long time, but had to reconsider them after I stumbled across a problem a couple of months ago.

Hidden SSID

Hiding your SSID will remove (null) the SSID Element in the Beacon Frames and Probe Response Frames.

Despite the fact that (at first glance) this looks like a good way to secure your WiFi, it should NOT be considered a security feature: every person that is a little technophile is able to get his/her hands on your SSID even if you hide it, because the SSID cannot be removed from Association Request and Reassociation Request Frames.
But hiding the SSID can still be considered a handy feature: if you have a guest WiFi and an internal WiFi, you can prevent guests from accidentally trying to access the internal WiFi by hiding it, and thereby protect your helpdesk from unnecessary phone calls.
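In frame terms, “hiding” simply means the SSID element (Element ID 0) in Beacons and Probe Responses carries either a zero length or only NULL bytes. A minimal Python sketch (the helper name and the example bytes are my own illustration, not from a real capture):

```python
def is_hidden_ssid(ssid_element: bytes) -> bool:
    """Return True if an SSID element (Element ID 0) looks "hidden":
    either zero-length or filled with NULL bytes."""
    element_id, length = ssid_element[0], ssid_element[1]
    assert element_id == 0, "not an SSID element"
    body = ssid_element[2:2 + length]
    return length == 0 or body == b"\x00" * length

# SSID element as broadcast by a visible vs. a hidden WiFi
visible      = bytes([0, 8]) + b"Internal"   # ID 0, length 8, "Internal"
hidden_empty = bytes([0, 0])                 # ID 0, length 0
hidden_nulls = bytes([0, 8]) + b"\x00" * 8   # ID 0, 8 NULL bytes

print(is_hidden_ssid(visible))       # False
print(is_hidden_ssid(hidden_empty))  # True
print(is_hidden_ssid(hidden_nulls))  # True
```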

Don’t Answer Broadcast Probes

In addition to listening to Beacon Frames (Passive Scanning), a client can send out Probe Request Frames (Active Scanning) to get an overview of the available Wireless LANs within earshot.
Besides the possibility to send a Direct Probe Request to one of its known SSIDs (“Hey ‘HomeSSID’, are you there?”), there is the option to send Broadcast Probe Requests, where no SSID is specified (“Hey, is somebody here?”), upon which every WiFi that hears this request has to answer with a Probe Response Frame.

Because in our case, every client that was allowed to connect to the internal WiFi got the internal WLAN profile manually configured and therefore “knew” its SSID, we decided that the internal SSID should not answer to Broadcast Probe Requests to reduce the amount of unnecessary Management Frames.
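The AP-side logic of that setting can be sketched as follows: a zero-length SSID in the Probe Request marks a Broadcast (wildcard) probe, and with the setting enabled, the AP simply stays silent. Function name and values are hypothetical, for illustration only:

```python
def should_respond(probe_ssid: bytes, our_ssid: bytes,
                   answer_broadcast: bool) -> bool:
    """AP-side decision sketch: answer a Probe Request or not?
    A zero-length SSID means a Broadcast (wildcard) probe."""
    if len(probe_ssid) == 0:           # broadcast probe
        return answer_broadcast
    return probe_ssid == our_ssid      # direct probe: exact SSID match

# With "Don't Answer Broadcast Probes" enabled on the internal SSID:
print(should_respond(b"", b"Internal", answer_broadcast=False))          # False
print(should_respond(b"Internal", b"Internal", answer_broadcast=False))  # True
print(should_respond(b"Guest", b"Internal", answer_broadcast=False))     # False
```

The first call is exactly the roaming failure described below: a client that only sends wildcard probes never gets an answer from such an SSID.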

How These Settings Can Impact Roaming

Roaming is such a big topic itself that it deserves its own book, so I am not going to open this whole can in this blogpost.
The fact is that the roaming decision is made by the client – normally when it hits a certain threshold (based on signal strength, SNR, error rate, etc.). This threshold differs from manufacturer to manufacturer and from driver version to driver version, and they never disclose it, so we can only guesstimate at what point a client is going to roam.
When a client hits this custom threshold, it starts to look for new APs within the same ESS it can roam to – so besides only listening to the Beacon Frames, it starts to send out Probe Requests.
These Probe Requests are usually Direct Probe Requests, because the client wants to connect to an AP within the same ESS, and not to a different or new SSID.
But I discovered a client that was new but had a really cheap 802.11g NIC built in, which unfortunately sent out Broadcast Probe Requests when it wanted to start roaming.

The consequence of this behavior (in combination with the 2 settings described above) was that the client desperately tried to find a new AP it could roam to, but never got an answer, and consequently stayed on its AP until it lost its connection. Ironically, after the disconnection, the client sent out a Direct Probe Request and was able to reconnect to the WiFi again.
But because the client was not able to properly roam and had to disconnect and reconnect every time, there was always a downtime of ~30–60 sec.


I hope this post is helpful for some of you out there facing a similar problem.

Cheers, Renzo

Violation of 802.11 Standard? – Intel Wireless Cards send “40MHz Intolerant”-Bit in 5GHz


Some of our customers have their offices in multistory buildings with other neighboring companies and residents. As most of you know, in situations like these, the 2.4GHz frequency band looks like the Wild West – often, people choose a random channel between 1 and 11, and sometimes they even think that using 40MHz channels in the 2.4GHz frequency band will improve their performance, because “the higher the data-rate, the better”…
To at least get rid of the latter, I thought about setting the “40MHz intolerant”-bit on our managed clients – and did some lab tests with interesting results.

20MHz- vs. 40MHz-Channels – Quick and Dirty

With the 802.11n-2009 Amendment, the possibility to bond two 20MHz channels together into a single 40MHz channel was introduced. This feature alone improves the data-rate by 108/52 (a factor of ~2.077), as the number of data subcarriers increases from 52 to 108.
But this enhancement comes at a cost: when using 40MHz-channels, the number of non-interfering channels decreases. This is not a big problem in the 5GHz frequency band as there are still 9 available 40MHz-channels in Switzerland:

In contrast to the 5GHz frequency band, in the 2.4GHz frequency band, the available spectrum is very limited, and it allows only one 40MHz-channel, which will cause at least CCI, and at worst ACI, as soon as there is a second AP within earshot.
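The 108/52 factor mentioned above can be verified with a quick calculation; the MCS 7 baseline (65 Mbit/s at 20MHz with an 800ns guard interval) comes from the 802.11n MCS tables:

```python
from fractions import Fraction

# Number of data subcarriers per OFDM symbol in 802.11n (HT)
DATA_SUBCARRIERS_20MHZ = 52
DATA_SUBCARRIERS_40MHZ = 108

factor = Fraction(DATA_SUBCARRIERS_40MHZ, DATA_SUBCARRIERS_20MHZ)
print(factor, float(factor))  # 27/13, ~2.0769

# Example: MCS 7 (64-QAM, coding rate 5/6, one spatial stream, 800ns GI)
mcs7_20mhz = Fraction(65)         # Mbit/s at 20MHz
mcs7_40mhz = mcs7_20mhz * factor  # exactly 135 Mbit/s at 40MHz
print(float(mcs7_40mhz))          # 135.0
```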

The “40MHz Intolerant”-Bit

Because of the inevitable problems that would occur in the 2.4GHz band when 40MHz channels are used, the “40MHz intolerant”-bit was introduced as a failsafe in the 802.11n-2009 Amendment.

The “40MHz intolerant”-bit is set in the “HT Capabilities Element” => “HT Capabilities Info Field”, which is part of the following management frames:
– Beacon
– Probe Request
– Probe Response
– Association Request
– Association Response
– Reassociation Request
– Reassociation Response
This allows either an AP or a client STA to indicate that they do not allow 40MHz-channels in the 2.4GHz frequency band by setting the “40MHz intolerant”-bit to 1, and the neighboring STAs have to adjust to this. Or to put it in simpler terms:

“If you do not have any close neighbors, you can use either 20MHz- or 40MHz-channels – whatever floats your boat. But if there is someone else within earshot that wants to use the 2.4GHz frequency band, you have to adjust your fat channel if he wants you to.”

As described in the 802.11-2016 subclause 11.16.11 (“Signaling 40 MHz intolerance”), the “40MHz intolerant”-bit is defined as a feature only for the 2.4GHz frequency band and has to be set to 0 in the 5GHz frequency band:

An HT STA 2G4 shall set the Forty MHz Intolerant field to 1 in transmitted HT Capabilities elements if dot11FortyMHzIntolerant is true; otherwise, the field shall be set to 0.

A STA 2G4 shall set the Forty MHz Intolerant field to 1 in transmitted 20/40 BSS Coexistence fields if dot11FortyMHzIntolerant is true; otherwise, the field shall be set to 0. A STA 2G4 that is not an HT STA 2G4 shall include a 20/40 BSS Coexistence element in Management frames in which the element may be present if dot11FortyMHzIntolerant is present and dot11FortyMHzIntolerant is true.

A STA 5G shall set the Forty MHz Intolerant field to 0 in transmitted HT Capabilities elements and 20/40 BSS Coexistence fields.
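For reference, the Forty MHz Intolerant bit sits at position B14 of the 16-bit HT Capabilities Info field, so a quick script can check it in a captured frame. The two example field values here are made up for illustration, not taken from my captures:

```python
FORTY_MHZ_INTOLERANT_BIT = 14  # bit position B14 in HT Capabilities Info

def forty_mhz_intolerant(ht_cap_info: int) -> bool:
    """Check the Forty MHz Intolerant bit (B14) of the 16-bit
    HT Capabilities Info field (already decoded from little-endian)."""
    return bool(ht_cap_info >> FORTY_MHZ_INTOLERANT_BIT & 1)

# Hypothetical field values from two frames:
print(forty_mhz_intolerant(0x106e))  # False - bit 14 not set
print(forty_mhz_intolerant(0x506e))  # True  - bit 14 set (0x4000)
```

If a STA transmits HT Capabilities elements in the 5GHz band with this bit set to 1, that is exactly the behavior the quoted subclause forbids.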

Lab Testing

So far so good.

First, I got myself some test notebooks:

Dell Latitude E7440 with Intel Dual Band Wireless-AC 7260 (Driver

Dell Latitude E6440 with Intel Centrino Ultimate-N 6300 AGN (Driver

They both had the setting “Fat Channel Intolerant” available, so I did some research on the Intel website:


It says that it is only available for certain adapters. Neither of the 2 adapters mentioned above is included, but still, the option is available for both of them in the advanced adapter settings, so I decided to give it a shot anyway.

After I changed the setting on my Intel Centrino Ultimate-N 6300 AGN, I wanted to check the result and did some packet capturing in the 2.4GHz band:

As you can see, the “40MHz intolerant”-bit is set to 1, which is exactly what I wanted.

So the client was connected to the 5GHz band, and was sending its Probe Requests in the 2.4GHz band with the flag set.
But then, I started to see some strange behaviors on the 5GHz-SSID I was connected to – it looked like the SSID was only sending with a 20MHz wide channel instead of the configured 40MHz:

Even a different client that was connected to the same SSID was suddenly only using a 20MHz-channel:

So I started capturing some packets in the 5GHz band and was shocked to see that the STA with the “Fat Channel Intolerant” setting set to “Enabled” was sending the “40MHz intolerant”-bit in the 5GHz band as well:

To make sure that this is not only a malfunction on one adapter, I did the same tests with the Intel Dual Band Wireless-AC 7260 and got the same results…

I can’t say if this occurs with the adapters listed as “supported” on the Intel website as well, because I don’t have them available at my place.

As far as I am concerned (based on my interpretation of the 802.11-2016 subclause 11.16.11), the 2 tested Intel wireless cards do not follow the standard, and you should not use the setting “Fat Channel Intolerant”, as it impacts your 5GHz band as well!

I welcome you to give me an explanation for this issue if you have one – am I right with my statement, or am I off track?
If you have any questions or feedback, please don’t hesitate to write a comment or contact me (but please be kind, as this is my first ever blog post ;-))

Cheers, Renzo