*Remark* This is a problem that arises from nonlinear quantization in AAC. There, an MDCT coefficient is power-compressed (AAC uses the 3/4 power) before linear quantization. Then we face the problem of choosing boundaries for the quantization intervals, where everything inside an interval is quantized to a single index and de-quantized to a single value, such that the de-quantization noise is statistically minimized. Of course, the solution to this problem depends on the underlying probability distribution of the coefficient. In the case of a uniform distribution, the de-quantized value should lie at the center of its interval. With this restriction, all the boundaries are uniquely determined by the first one. But the interval length, as revealed by numerical experiments, almost always undulates while asymptotically increasing, for a randomly chosen first boundary. Then, is there a first boundary that makes the interval lengths monotonically increase? The answer is yes, and it is uniquely determined, as shown in the following.
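Since the formulas did not survive conversion, here is a hedged reconstruction of the setup in my own notation; treat every symbol below as an assumption rather than the original's.

```latex
% Reconstruction with my own symbols; the original notation was lost.
% In AAC, quantizer index i is de-quantized (up to scaling) by a 4/3-power
% expansion:
\[ \hat{x}_i = i^{4/3}. \]
% Uniform-distribution (midpoint) condition: \hat{x}_i sits at the center
% of the i-th interval [x_i, x_{i+1}), which turns the boundaries into a
% recursion driven entirely by the first boundary x_1:
\[ \hat{x}_i = \frac{x_i + x_{i+1}}{2}
   \quad\Longrightarrow\quad
   x_{i+1} = 2\,i^{4/3} - x_i . \]
% The question is whether some choice of x_1 makes the interval lengths
% d_i = x_{i+1} - x_i increase monotonically in i.
```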

*Proof* Let us denote the difference of consecutive intervals’ lengths by . We claim that for all . If so, to ensure for all , we need to show that there exists a unique such that both and exist and are non-negative. First, by the relation , we have . Furthermore, let ; then

where and the last equation is due to recursive application of Rolle’s theorem (first about , then about ). Since for when , we have . Next, consider the limit of :

where the last equation is due to for . Thus, the desired , if it ever exists, must ensure . Conversely, if with some , , then for all .

Now, it suffices to show that there exists a unique such that . Let , then . We claim that converges as . In that case, we shall have

where and the last equation above follows from when . Therefore, the unique will be . It remains to show that does converge.

Let us investigate the difference between and :

where and the last equation is also due to recursive application of Rolle’s theorem. (First about , then about , and finally about .) Therefore, we have , which implies that converges.

This completes the proof.


In Matlab, we hold the per-frequency matrices in 3D arrays A and B, and the products in C. Suppose frequency is along the first dimension; then one plain implementation of the 3D matrix multiplication is

    A1 = permute(A, [2 3 1]);
    B1 = permute(B, [2 3 1]);
    for f = 1:F
        C(f,:,:) = A1(:,:,f) * B1(:,:,f);
    end

But if F is large while M and N are small, as in a multi-channel Wiener filter, the loop along frequency drags the speed down significantly.

The native power of Matlab lies in matrix and vector operations, which are lightning fast. (JIT helps a lot in some cases, but not always.) Here the strategy is to replace the loop along frequency with loops along the second and third dimensions, and vectorize the inner operations:

    A1 = permute(A, [1 3 2]);
    for m = 1:M
        for n = 1:N
            C(:,m,n) = sum(A1(:,:,m) .* B(:,:,n), 2);
        end
    end

The total number of iterations is now M×N instead of F. Using this strategy, I observed a more-than-10X speed-up.
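For readers without Matlab at hand, the same trick can be checked in NumPy (a sketch; shapes and names follow the Matlab code above):

```python
import numpy as np

# Shapes follow the Matlab code: A is F x M x K, B is F x K x N,
# and C(f) = A(f) @ B(f) for every frequency bin f.
F, M, K, N = 512, 4, 3, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((F, M, K))
B = rng.standard_normal((F, K, N))

# Plain implementation: loop over the (large) frequency dimension.
C_loop = np.empty((F, M, N))
for f in range(F):
    C_loop[f] = A[f] @ B[f]

# Vectorized along frequency: loop over the (small) m and n instead.
C_vec = np.empty((F, M, N))
for m in range(M):
    for n in range(N):
        C_vec[:, m, n] = np.sum(A[:, m, :] * B[:, :, n], axis=1)

assert np.allclose(C_loop, C_vec)
```

In NumPy, `A @ B` already broadcasts matrix multiplication over the leading dimension, so the whole thing is one call; the sketch only illustrates the loop-reordering idea that the Matlab code relies on.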

In the special case where the inner dimension equals 1 (so each per-frequency product is an outer product), the above code can be simplified further:

    B1 = squeeze(B);
    for n = 1:N
        C(:,:,n) = A .* repmat(B1(:,n), 1, M);
    end

Another important case is when the first factor is a diagonal matrix at each frequency; then the product can be found in one stroke:

    % A(:,1:M+1:M*M) extracts the diagonal entries of each M-by-M matrix
    C = repmat(A(:,1:M+1:M*M), [1 1 N]) .* B;
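A quick NumPy check of the diagonal shortcut (a sketch; the fancy indexing here plays the role of the Matlab linear indexing `A(:,1:M+1:M*M)`):

```python
import numpy as np

# A(f) is an M x M diagonal matrix for each frequency bin f; B is F x M x N.
F, M, N = 256, 4, 3
rng = np.random.default_rng(1)
d = rng.standard_normal((F, M))          # diagonal entries per bin
B = rng.standard_normal((F, M, N))

A = np.zeros((F, M, M))
A[:, np.arange(M), np.arange(M)] = d     # A(f) = diag(d[f])

# One stroke: scaling row m of B(f) by d[f, m] equals diag(d[f]) @ B(f).
C = d[:, :, None] * B

# Reference: explicit per-frequency matrix products.
C_ref = A @ B
assert np.allclose(C, C_ref)
```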

It feels very good to see results jumping out right away.


*Proof.* We shall first prove that the set is dense at 0, then proceed to the whole range. Without loss of generality, the irrational number is assumed to lie within (0, 1).

To show density at 0, we claim that there exists a such that . Denote the largest integer satisfying ; then . If , then is what we need; else let . Denote the largest integer satisfying ; then and , that is, is what we need. Then apply this procedure to and so on, which leads to a sequence approaching 0 at least as fast as .

Further, for any , we can find a such that by the above argument. Then for any , , thus is dense at . This completes the proof.

Alternatively, we can prove Prop. 1 by Dirichlet’s approximation theorem, which states that for any real number and positive integer , there exist integers and such that and . Thus is dense at and then dense everywhere on .
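Both proofs are easy to probe numerically. A minimal NumPy sketch (α = √2 is an arbitrary irrational; `max_gap` is my own helper): as more multiples are taken, the largest gap between the fractional parts shrinks, consistent with density on [0, 1].

```python
import numpy as np

alpha = np.sqrt(2.0)   # any irrational works

def max_gap(n):
    """Largest gap between adjacent points of {frac(k*alpha) : 1 <= k <= n}."""
    pts = np.sort((alpha * np.arange(1, n + 1)) % 1.0)
    # Include 0 and 1 as sentinels so boundary gaps are counted too.
    gaps = np.diff(np.concatenate(([0.0], pts, [1.0])))
    return gaps.max()

# The max gap shrinks as n grows.
assert max_gap(10_000) < max_gap(100) < max_gap(10)
```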

*Remark* 1. Prop. 1 can be formulated on the unit circle , that is, the set for any irrational number is dense on .

*Remark* 2. The interval can be replaced with where as long as is irrational ( might be rational).

*Remark* 3. If we replace with its subset , where , will still be dense in . This is because , in which is dense in and amounts to a constant (circular) shifting. On the other hand, for any infinite subset , will be dense somewhere in , since is compact, but is not necessarily dense everywhere in , for example, .

**Problem 1.** Let be the largest interval between two adjacent points from on . By Prop. 1, monotonically goes to . But how fast does it decay as goes to infinity? For example, could it be ?

**Problem 2.** Let be an irrational number. Then, for what kind of is dense in ? Equivalently, we could ask for what kind of subset the set is dense in . Specifically, is dense on ? Let be the set of prime numbers; is dense on ?
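A finite computation proves nothing, but the square case of Problem 2 is easy to probe; in fact, by Weyl's equidistribution theorem the fractional parts of n²α are equidistributed for irrational α, hence dense. A sketch (α = √2 assumed):

```python
import numpy as np

# Fractional parts of n^2 * alpha for the first 20000 values of n.
alpha = np.sqrt(2.0)
n = np.arange(1, 20_001)
pts = np.sort((alpha * n**2) % 1.0)

# Largest gap between adjacent points, with 0 and 1 as sentinels.
gaps = np.diff(np.concatenate(([0.0], pts, [1.0])))
assert gaps.max() < 0.05   # no large hole among the first 20000 points
```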



Surprisingly, I could understand most of this post. Tao’s clarity of exposition is always amazing, on top of his power in mathematics.

I’ve just uploaded to the arXiv my paper “Every odd number greater than 1 is the sum of at most five primes“, submitted to Mathematics of Computation. The main result of the paper is as stated in the title, and is in the spirit of (though significantly weaker than) the even Goldbach conjecture (every even natural number is the sum of at most two primes) and odd Goldbach conjecture (every odd natural number greater than 1 is the sum of at most three primes). It also improves on a result of Ramaré that every even natural number is the sum of at most six primes. This result had previously also been established by Kaniecki under the additional assumption of the Riemann hypothesis, so one can view the main result here as an unconditional version of Kaniecki’s result.

The method used is the Hardy-Littlewood circle method, which…



But that is so un-Linux and so unsustainable.

It was time to exercise my bash-fu and sed-fu. The attacks were logged in */var/log/messages* and looked like this (in part):

sshd(pam_unix)[6548]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.101.223.211 user=root

The idea is to periodically check (hourly by cron) the log file and find IPs with a large number (≥ 16) of unsuccessful SSH login attempts. With the following line, the IPs can be parsed out, sorted, and counted:

sed -n "/${SERVICE}.*authentication failure/ s/^.*rhost=\([0-9.]*\).*$/\1/p" "${LOGFILE}" | sort | uniq -c

where SERVICE=sshd and LOGFILE=*/var/log/messages*. The output is like this:

     19 112.187.163.188
    140 121.101.223.211
     45 211.139.10.219

The leading numbers are counts of unsuccessful SSH login attempts, followed by the corresponding IPs. Then echo the IPs to */etc/hosts.deny* if they are not there yet and their counts are no less than 16. Of course, I should also protect the FTP service (vsftpd). Not too hard, ha!
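The script below implements this counting with sed, sort, and uniq; for illustration only, here is the same thresholding logic sketched in Python (the log lines are hypothetical, modeled on the sample above):

```python
import re
from collections import Counter

# Hypothetical log excerpt in the same format as above.
log = """\
sshd(pam_unix)[6548]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.101.223.211 user=root
sshd(pam_unix)[6549]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.101.223.211 user=root
sshd(pam_unix)[6550]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=112.187.163.188 user=root
"""

LIMIT = 2  # threshold for this toy excerpt (the script defaults to 16)
pat = re.compile(r"sshd.*authentication failure.*rhost=([0-9.]+)")

# Count failed attempts per source IP, then keep the ones over the limit.
counts = Counter(m.group(1) for line in log.splitlines()
                 if (m := pat.search(line)))
banned = [ip for ip, c in counts.items() if c >= LIMIT]
print(banned)  # -> ['121.101.223.211']
```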

The following is the complete bash script (*ip_block*). Put it into */etc/cron.hourly* and it will work for you 24/7, for free!

    #!/bin/bash
    # Add IPs with more invalid login attempts than permitted to hosts.deny.
    # Usage: ip_block [invalid login limit, default 16]

    if [ "$EUID" -ne 0 ]; then
        echo "You must be root to run this script!"
        exit 1
    fi

    SYSLOG_FILE="/var/log/messages"
    HOSTDENY_LIST="/etc/hosts.deny"
    [ "$#" -ge 1 ] && LOGIN_LIMIT="$1" || LOGIN_LIMIT=16

    function add_to_list () {
        SERVICE="$1"
        LOGFILE="$2"
        LSTFILE="$3"
        MAXCOUNT="$4"
        # Parse out the offending IPs, then count occurrences of each.
        sed -n "/${SERVICE}.*authentication failure/ s/^.*rhost=\([0-9.]*\).*$/\1/p" "${LOGFILE}" \
            | sort | uniq -c | while read line; do
            count=`echo $line | sed 's/ .*$//'`     # leading count from uniq -c
            ip=`echo $line | sed 's/[0-9]* *//'`    # the IP that follows it
            n=`grep -c -e "${SERVICE}: *$ip" "${LSTFILE}"`
            # Append the IP if it crossed the limit and is not listed yet.
            [ "$count" -ge "$MAXCOUNT" -a "$n" -eq 0 ] && echo "${SERVICE}: $ip" >> "${LSTFILE}"
        done
    }

    add_to_list sshd   "${SYSLOG_FILE}" "${HOSTDENY_LIST}" $LOGIN_LIMIT
    add_to_list vsftpd "${SYSLOG_FILE}" "${HOSTDENY_LIST}" $LOGIN_LIMIT
